00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 509 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3174 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.064 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.064 The recommended git tool is: git 00:00:00.064 using credential 00000000-0000-0000-0000-000000000002 00:00:00.066 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.098 Fetching changes from the remote Git repository 00:00:00.100 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.141 Using shallow fetch with depth 1 00:00:00.141 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.141 > git --version # timeout=10 00:00:00.177 > git --version # 'git version 2.39.2' 00:00:00.177 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.206 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.206 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.774 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.785 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.795 Checking out Revision ea7646cba2e992b05bb6a53407de7fbcf465b5c6 (FETCH_HEAD) 00:00:04.795 > git config core.sparsecheckout # timeout=10 00:00:04.807 > git read-tree -mu HEAD # timeout=10 00:00:04.821 > git checkout -f ea7646cba2e992b05bb6a53407de7fbcf465b5c6 # timeout=5 00:00:04.842 Commit message: "ansible/inventory: Fix GP16's BMC address" 00:00:04.842 > git rev-list --no-walk ea7646cba2e992b05bb6a53407de7fbcf465b5c6 # timeout=10 00:00:04.945 [Pipeline] Start of Pipeline 00:00:04.958 [Pipeline] library 00:00:04.960 Loading library shm_lib@master 00:00:04.960 Library shm_lib@master is cached. Copying from home. 00:00:04.981 [Pipeline] node 00:00:04.989 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.991 [Pipeline] { 00:00:05.001 [Pipeline] catchError 00:00:05.002 [Pipeline] { 00:00:05.012 [Pipeline] wrap 00:00:05.021 [Pipeline] { 00:00:05.027 [Pipeline] stage 00:00:05.028 [Pipeline] { (Prologue) 00:00:05.192 [Pipeline] sh 00:00:05.480 + logger -p user.info -t JENKINS-CI 00:00:05.499 [Pipeline] echo 00:00:05.501 Node: CYP12 00:00:05.510 [Pipeline] sh 00:00:05.816 [Pipeline] setCustomBuildProperty 00:00:05.827 [Pipeline] echo 00:00:05.829 Cleanup processes 00:00:05.833 [Pipeline] sh 00:00:06.121 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.121 1147732 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.135 [Pipeline] sh 00:00:06.422 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.422 ++ grep -v 'sudo pgrep' 00:00:06.422 ++ awk '{print $1}' 00:00:06.422 + sudo kill -9 00:00:06.422 + true 00:00:06.436 [Pipeline] cleanWs 00:00:06.446 [WS-CLEANUP] Deleting project workspace... 00:00:06.446 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.453 [WS-CLEANUP] done 00:00:06.457 [Pipeline] setCustomBuildProperty 00:00:06.470 [Pipeline] sh 00:00:06.754 + sudo git config --global --replace-all safe.directory '*' 00:00:06.803 [Pipeline] nodesByLabel 00:00:06.804 Found a total of 2 nodes with the 'sorcerer' label 00:00:06.811 [Pipeline] httpRequest 00:00:06.816 HttpMethod: GET 00:00:06.816 URL: http://10.211.164.101/packages/jbp_ea7646cba2e992b05bb6a53407de7fbcf465b5c6.tar.gz 00:00:06.822 Sending request to url: http://10.211.164.101/packages/jbp_ea7646cba2e992b05bb6a53407de7fbcf465b5c6.tar.gz 00:00:06.841 Response Code: HTTP/1.1 200 OK 00:00:06.841 Success: Status code 200 is in the accepted range: 200,404 00:00:06.841 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_ea7646cba2e992b05bb6a53407de7fbcf465b5c6.tar.gz 00:00:30.876 [Pipeline] sh 00:00:31.167 + tar --no-same-owner -xf jbp_ea7646cba2e992b05bb6a53407de7fbcf465b5c6.tar.gz 00:00:31.184 [Pipeline] httpRequest 00:00:31.189 HttpMethod: GET 00:00:31.190 URL: http://10.211.164.101/packages/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:31.190 Sending request to url: http://10.211.164.101/packages/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:31.195 Response Code: HTTP/1.1 200 OK 00:00:31.195 Success: Status code 200 is in the accepted range: 200,404 00:00:31.196 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:47.670 [Pipeline] sh 00:00:47.956 + tar --no-same-owner -xf spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:50.511 [Pipeline] sh 00:00:50.799 + git -C spdk log --oneline -n5 00:00:50.799 130b9406a test/nvmf: replace rpc_cmd() with direct invocation of rpc.py due to inherently larger timeout 00:00:50.799 5d3fd6726 bdev: Fix a race bug between unregistration and QoS poller 00:00:50.799 fbc673ece test/scheduler: Meassure utime of $spdk_pid threads as a fallback 00:00:50.799 3651466d0 test/scheduler: Calculate median of the cpu load samples 00:00:50.799 a7414547f test/scheduler: Make sure stderr is not O_TRUNCated in move_proc() 00:00:50.819 [Pipeline] withCredentials 00:00:50.863 > git --version # timeout=10 00:00:50.876 > git --version # 'git version 2.39.2' 00:00:50.899 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:50.901 [Pipeline] { 00:00:50.911 [Pipeline] retry 00:00:50.912 [Pipeline] { 00:00:50.930 [Pipeline] sh 00:00:51.441 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:55.660 [Pipeline] } 00:00:55.683 [Pipeline] // retry 00:00:55.688 [Pipeline] } 00:00:55.709 [Pipeline] // withCredentials 00:00:55.721 [Pipeline] httpRequest 00:00:55.726 HttpMethod: GET 00:00:55.727 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:55.730 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:55.734 Response Code: HTTP/1.1 200 OK 00:00:55.735 Success: Status code 200 is in the accepted range: 200,404 00:00:55.736 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:10.830 [Pipeline] sh 00:01:11.118 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:13.047 [Pipeline] sh 00:01:13.335 + git -C dpdk log --oneline -n5 00:01:13.335 eeb0605f11 version: 23.11.0 00:01:13.335 238778122a doc: update release notes for 23.11 00:01:13.336 46aa6b3cfc doc: fix description of RSS features 00:01:13.336 
dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:13.336 7e421ae345 devtools: support skipping forbid rule check 00:01:13.348 [Pipeline] } 00:01:13.367 [Pipeline] // stage 00:01:13.377 [Pipeline] stage 00:01:13.379 [Pipeline] { (Prepare) 00:01:13.402 [Pipeline] writeFile 00:01:13.420 [Pipeline] sh 00:01:13.708 + logger -p user.info -t JENKINS-CI 00:01:13.721 [Pipeline] sh 00:01:14.009 + logger -p user.info -t JENKINS-CI 00:01:14.022 [Pipeline] sh 00:01:14.310 + cat autorun-spdk.conf 00:01:14.310 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.310 SPDK_TEST_NVMF=1 00:01:14.310 SPDK_TEST_NVME_CLI=1 00:01:14.310 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:14.310 SPDK_TEST_NVMF_NICS=e810 00:01:14.310 SPDK_TEST_VFIOUSER=1 00:01:14.310 SPDK_RUN_UBSAN=1 00:01:14.310 NET_TYPE=phy 00:01:14.311 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:14.311 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:14.319 RUN_NIGHTLY=1 00:01:14.324 [Pipeline] readFile 00:01:14.351 [Pipeline] withEnv 00:01:14.353 [Pipeline] { 00:01:14.366 [Pipeline] sh 00:01:14.652 + set -ex 00:01:14.652 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:14.652 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:14.652 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.652 ++ SPDK_TEST_NVMF=1 00:01:14.652 ++ SPDK_TEST_NVME_CLI=1 00:01:14.652 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:14.652 ++ SPDK_TEST_NVMF_NICS=e810 00:01:14.652 ++ SPDK_TEST_VFIOUSER=1 00:01:14.652 ++ SPDK_RUN_UBSAN=1 00:01:14.652 ++ NET_TYPE=phy 00:01:14.652 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:14.652 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:14.652 ++ RUN_NIGHTLY=1 00:01:14.652 + case $SPDK_TEST_NVMF_NICS in 00:01:14.652 + DRIVERS=ice 00:01:14.652 + [[ tcp == \r\d\m\a ]] 00:01:14.652 + [[ -n ice ]] 00:01:14.652 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:14.652 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:14.652 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:14.652 rmmod: ERROR: Module irdma is not currently loaded 00:01:14.652 rmmod: ERROR: Module i40iw is not currently loaded 00:01:14.652 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:14.652 + true 00:01:14.652 + for D in $DRIVERS 00:01:14.652 + sudo modprobe ice 00:01:14.652 + exit 0 00:01:14.662 [Pipeline] } 00:01:14.682 [Pipeline] // withEnv 00:01:14.687 [Pipeline] } 00:01:14.707 [Pipeline] // stage 00:01:14.717 [Pipeline] catchError 00:01:14.719 [Pipeline] { 00:01:14.735 [Pipeline] timeout 00:01:14.736 Timeout set to expire in 50 min 00:01:14.737 [Pipeline] { 00:01:14.754 [Pipeline] stage 00:01:14.757 [Pipeline] { (Tests) 00:01:14.774 [Pipeline] sh 00:01:15.063 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:15.063 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:15.063 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:15.063 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:15.063 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:15.063 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:15.063 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:15.063 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:15.063 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:15.063 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:15.063 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:15.063 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:15.063 + source /etc/os-release 00:01:15.063 ++ NAME='Fedora Linux' 00:01:15.063 ++ VERSION='38 (Cloud Edition)' 00:01:15.063 ++ ID=fedora 00:01:15.063 ++ VERSION_ID=38 00:01:15.063 ++ VERSION_CODENAME= 00:01:15.063 ++ PLATFORM_ID=platform:f38 00:01:15.063 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:15.063 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:15.063 ++ LOGO=fedora-logo-icon 00:01:15.064 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:15.064 ++ HOME_URL=https://fedoraproject.org/ 00:01:15.064 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:15.064 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:15.064 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:15.064 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:15.064 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:15.064 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:15.064 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:15.064 ++ SUPPORT_END=2024-05-14 00:01:15.064 ++ VARIANT='Cloud Edition' 00:01:15.064 ++ VARIANT_ID=cloud 00:01:15.064 + uname -a 00:01:15.064 Linux spdk-cyp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:15.064 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:17.609 Hugepages 00:01:17.609 node hugesize free / total 00:01:17.870 node0 1048576kB 0 / 0 00:01:17.870 node0 2048kB 0 / 0 00:01:17.870 node1 1048576kB 0 / 0 00:01:17.870 node1 2048kB 0 / 0 00:01:17.870 00:01:17.870 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:17.870 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:17.870 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:17.870 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:17.870 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:17.870 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:17.870 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:17.870 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:17.870 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:17.870 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:17.870 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:17.870 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:17.870 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:17.870 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:17.870 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:17.870 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:17.870 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:17.870 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:17.870 + rm -f /tmp/spdk-ld-path 00:01:17.870 + source autorun-spdk.conf 00:01:17.870 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.870 ++ SPDK_TEST_NVMF=1 00:01:17.870 ++ SPDK_TEST_NVME_CLI=1 00:01:17.870 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:17.870 ++ SPDK_TEST_NVMF_NICS=e810 00:01:17.870 ++ SPDK_TEST_VFIOUSER=1 00:01:17.870 ++ SPDK_RUN_UBSAN=1 00:01:17.870 ++ NET_TYPE=phy 00:01:17.870 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:17.870 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:17.870 ++ RUN_NIGHTLY=1 00:01:17.870 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:17.870 + [[ -n '' ]] 00:01:17.870 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:17.870 + for M in /var/spdk/build-*-manifest.txt 00:01:17.870 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:17.870 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:17.870 + for M in /var/spdk/build-*-manifest.txt 00:01:17.870 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:17.870 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:17.870 ++ uname 00:01:17.870 + [[ Linux == \L\i\n\u\x ]] 00:01:17.870 + sudo dmesg -T 00:01:18.132 + sudo dmesg --clear 00:01:18.132 + dmesg_pid=1148773 00:01:18.132 + [[ Fedora Linux == FreeBSD ]] 00:01:18.132 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:18.132 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:18.132 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:18.132 + [[ -x /usr/src/fio-static/fio ]] 00:01:18.132 + export FIO_BIN=/usr/src/fio-static/fio 00:01:18.132 + FIO_BIN=/usr/src/fio-static/fio 00:01:18.132 + sudo dmesg -Tw 00:01:18.132 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:18.132 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:18.132 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:18.132 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:18.132 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:18.132 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:18.132 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:18.132 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:18.132 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:18.132 Test configuration: 00:01:18.132 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.132 SPDK_TEST_NVMF=1 00:01:18.132 SPDK_TEST_NVME_CLI=1 00:01:18.132 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:18.132 SPDK_TEST_NVMF_NICS=e810 00:01:18.132 SPDK_TEST_VFIOUSER=1 00:01:18.132 SPDK_RUN_UBSAN=1 00:01:18.132 NET_TYPE=phy 00:01:18.132 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:18.132 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:18.132 RUN_NIGHTLY=1 11:55:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:18.132 11:55:31 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:18.132 11:55:31 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:18.132 11:55:31 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:18.132 11:55:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.133 11:55:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.133 11:55:31 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.133 11:55:31 -- paths/export.sh@5 -- $ export PATH 00:01:18.133 11:55:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.133 11:55:31 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:18.133 11:55:31 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:18.133 11:55:31 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1718099731.XXXXXX 00:01:18.133 11:55:31 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1718099731.GRGJcA 00:01:18.133 11:55:31 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:18.133 11:55:31 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:01:18.133 11:55:31 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:18.133 11:55:31 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:18.133 11:55:31 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:18.133 11:55:31 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:18.133 11:55:31 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:18.133 11:55:31 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:18.133 11:55:31 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.133 11:55:31 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:18.133 11:55:31 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:18.133 11:55:31 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:18.133 11:55:31 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:18.133 11:55:31 -- spdk/autobuild.sh@16 -- $ date -u 00:01:18.133 Tue Jun 11 09:55:31 AM UTC 2024 00:01:18.133 11:55:31 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:18.133 LTS-43-g130b9406a 00:01:18.133 11:55:31 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:18.133 11:55:31 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:18.133 11:55:31 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:18.133 11:55:31 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:18.133 11:55:31 -- common/autotest_common.sh@1083 -- $ xtrace_disable 
00:01:18.133 11:55:31 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.133 ************************************ 00:01:18.133 START TEST ubsan 00:01:18.133 ************************************ 00:01:18.133 11:55:31 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:18.133 using ubsan 00:01:18.133 00:01:18.133 real 0m0.001s 00:01:18.133 user 0m0.001s 00:01:18.133 sys 0m0.000s 00:01:18.133 11:55:31 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:18.133 11:55:31 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.133 ************************************ 00:01:18.133 END TEST ubsan 00:01:18.133 ************************************ 00:01:18.133 11:55:31 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:18.133 11:55:31 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:18.133 11:55:31 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:18.133 11:55:31 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:01:18.133 11:55:31 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:18.133 11:55:31 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.394 ************************************ 00:01:18.394 START TEST build_native_dpdk 00:01:18.394 ************************************ 00:01:18.394 11:55:31 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:01:18.394 11:55:31 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:18.394 11:55:31 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:18.394 11:55:31 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:18.394 11:55:31 -- common/autobuild_common.sh@51 -- $ local compiler 00:01:18.394 11:55:31 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:18.394 11:55:31 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:18.394 11:55:31 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:18.394 11:55:31 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:18.394 11:55:31 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:18.394 11:55:31 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:18.394 11:55:31 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:18.394 11:55:31 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:18.394 11:55:31 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:18.394 11:55:31 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:18.394 11:55:31 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:18.394 11:55:31 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:18.394 11:55:31 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:18.394 11:55:31 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:18.394 11:55:31 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:18.394 11:55:31 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:18.394 eeb0605f11 version: 23.11.0 00:01:18.394 238778122a doc: update release notes for 23.11 00:01:18.394 46aa6b3cfc doc: fix description of RSS features 00:01:18.394 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:18.394 7e421ae345 devtools: support skipping forbid rule check 00:01:18.394 11:55:31 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:18.394 11:55:31 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:18.394 11:55:31 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:18.394 11:55:31 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:18.394 11:55:31 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:18.394 11:55:31 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:18.394 11:55:31 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:18.394 11:55:31 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:18.394 11:55:31 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:18.394 11:55:31 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:18.394 11:55:31 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:18.394 11:55:31 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:18.394 11:55:31 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:18.394 11:55:31 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:18.394 11:55:31 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:18.394 11:55:31 -- common/autobuild_common.sh@168 -- $ uname -s 00:01:18.394 11:55:31 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:18.394 11:55:31 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:18.394 11:55:31 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:18.394 11:55:31 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:18.394 11:55:31 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:18.394 11:55:31 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:18.394 11:55:31 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:18.394 11:55:31 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:18.394 11:55:31 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:18.394 11:55:31 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:18.394 11:55:31 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:18.394 11:55:31 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:18.394 11:55:31 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:18.394 11:55:31 -- scripts/common.sh@343 -- $ case "$op" in 00:01:18.394 11:55:31 -- scripts/common.sh@344 -- $ : 1 00:01:18.394 11:55:31 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:18.394 11:55:31 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:18.394 11:55:31 -- scripts/common.sh@364 -- $ decimal 23 00:01:18.394 11:55:31 -- scripts/common.sh@352 -- $ local d=23 00:01:18.394 11:55:31 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:18.394 11:55:31 -- scripts/common.sh@354 -- $ echo 23 00:01:18.394 11:55:31 -- scripts/common.sh@364 -- $ ver1[v]=23 00:01:18.394 11:55:31 -- scripts/common.sh@365 -- $ decimal 21 00:01:18.394 11:55:31 -- scripts/common.sh@352 -- $ local d=21 00:01:18.394 11:55:31 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:18.394 11:55:31 -- scripts/common.sh@354 -- $ echo 21 00:01:18.394 11:55:31 -- scripts/common.sh@365 -- $ ver2[v]=21 00:01:18.394 11:55:31 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:18.394 11:55:31 -- scripts/common.sh@366 -- $ return 1 00:01:18.394 11:55:31 -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:18.394 patching file config/rte_config.h 00:01:18.394 Hunk #1 succeeded at 60 (offset 1 line). 00:01:18.394 11:55:31 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:18.394 11:55:31 -- common/autobuild_common.sh@178 -- $ uname -s 00:01:18.394 11:55:31 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:18.394 11:55:31 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:18.394 11:55:31 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:23.678 The Meson build system 00:01:23.678 Version: 1.3.1 00:01:23.678 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:23.678 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:23.678 Build type: native build 00:01:23.678 Program cat found: YES (/usr/bin/cat) 00:01:23.678 Project name: DPDK 00:01:23.678 Project version: 23.11.0 00:01:23.678 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:23.678 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:23.678 Host machine cpu family: x86_64 00:01:23.678 Host machine cpu: x86_64 00:01:23.678 Message: ## Building in Developer Mode ## 00:01:23.678 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:23.678 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:23.678 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:23.678 Program python3 found: YES (/usr/bin/python3) 00:01:23.678 Program cat found: YES (/usr/bin/cat) 00:01:23.678 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:23.678 Compiler for C supports arguments -march=native: YES 00:01:23.678 Checking for size of "void *" : 8 00:01:23.678 Checking for size of "void *" : 8 (cached) 00:01:23.678 Library m found: YES 00:01:23.678 Library numa found: YES 00:01:23.678 Has header "numaif.h" : YES 00:01:23.678 Library fdt found: NO 00:01:23.678 Library execinfo found: NO 00:01:23.678 Has header "execinfo.h" : YES 00:01:23.678 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:23.678 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:23.679 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:23.679 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:23.679 Run-time dependency openssl found: YES 3.0.9 00:01:23.679 Run-time dependency libpcap found: YES 1.10.4 00:01:23.679 Has header "pcap.h" with dependency libpcap: YES 00:01:23.679 Compiler for C supports arguments -Wcast-qual: YES 00:01:23.679 Compiler for C supports arguments -Wdeprecated: YES 00:01:23.679 Compiler for C supports arguments -Wformat: YES 00:01:23.679 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:23.679 Compiler for C supports arguments -Wformat-security: NO 00:01:23.679 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:23.679 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:23.679 Compiler for C supports arguments -Wnested-externs: YES 00:01:23.679 Compiler for C supports arguments -Wold-style-definition: YES 00:01:23.679 Compiler for C supports arguments -Wpointer-arith: YES 00:01:23.679 Compiler for C supports arguments -Wsign-compare: YES 00:01:23.679 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:23.679 Compiler for C supports arguments -Wundef: YES 00:01:23.679 Compiler for C supports arguments -Wwrite-strings: YES 00:01:23.679 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:23.679 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:23.679 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:23.679 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:23.679 Program objdump found: YES (/usr/bin/objdump) 00:01:23.679 Compiler for C supports arguments -mavx512f: YES 00:01:23.679 Checking if "AVX512 checking" compiles: YES 00:01:23.679 Fetching value of define "__SSE4_2__" : 1 00:01:23.679 Fetching value of define "__AES__" : 1 00:01:23.679 Fetching value of define "__AVX__" : 1 00:01:23.679 Fetching value of define "__AVX2__" : 1 00:01:23.679 Fetching value of define "__AVX512BW__" : 1 00:01:23.679 Fetching value of define "__AVX512CD__" : 1 00:01:23.679 Fetching value of define "__AVX512DQ__" : 1 00:01:23.679 Fetching value of define "__AVX512F__" : 1 00:01:23.679 Fetching value of define "__AVX512VL__" : 1 00:01:23.679 Fetching value of define "__PCLMUL__" : 1 00:01:23.679 Fetching value of define "__RDRND__" : 1 00:01:23.679 Fetching value of define "__RDSEED__" : 1 00:01:23.679 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:23.679 Fetching value of define "__znver1__" : (undefined) 00:01:23.679 Fetching value of define "__znver2__" : (undefined) 00:01:23.679 Fetching value of define "__znver3__" : (undefined) 00:01:23.679 Fetching value of define "__znver4__" : (undefined) 00:01:23.679 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:23.679 Message: lib/log: Defining dependency "log" 00:01:23.679 Message: lib/kvargs: Defining dependency "kvargs" 00:01:23.679 Message: lib/telemetry: Defining dependency "telemetry" 
00:01:23.679 Checking for function "getentropy" : NO 00:01:23.679 Message: lib/eal: Defining dependency "eal" 00:01:23.679 Message: lib/ring: Defining dependency "ring" 00:01:23.679 Message: lib/rcu: Defining dependency "rcu" 00:01:23.679 Message: lib/mempool: Defining dependency "mempool" 00:01:23.679 Message: lib/mbuf: Defining dependency "mbuf" 00:01:23.679 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:23.679 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:23.679 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:23.679 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:23.679 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:23.679 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:23.679 Compiler for C supports arguments -mpclmul: YES 00:01:23.679 Compiler for C supports arguments -maes: YES 00:01:23.679 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:23.679 Compiler for C supports arguments -mavx512bw: YES 00:01:23.679 Compiler for C supports arguments -mavx512dq: YES 00:01:23.679 Compiler for C supports arguments -mavx512vl: YES 00:01:23.679 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:23.679 Compiler for C supports arguments -mavx2: YES 00:01:23.679 Compiler for C supports arguments -mavx: YES 00:01:23.679 Message: lib/net: Defining dependency "net" 00:01:23.679 Message: lib/meter: Defining dependency "meter" 00:01:23.679 Message: lib/ethdev: Defining dependency "ethdev" 00:01:23.679 Message: lib/pci: Defining dependency "pci" 00:01:23.679 Message: lib/cmdline: Defining dependency "cmdline" 00:01:23.679 Message: lib/metrics: Defining dependency "metrics" 00:01:23.679 Message: lib/hash: Defining dependency "hash" 00:01:23.679 Message: lib/timer: Defining dependency "timer" 00:01:23.679 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:23.679 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:23.679 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:23.679 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:23.679 Message: lib/acl: Defining dependency "acl" 00:01:23.679 Message: lib/bbdev: Defining dependency "bbdev" 00:01:23.679 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:23.679 Run-time dependency libelf found: YES 0.190 00:01:23.679 Message: lib/bpf: Defining dependency "bpf" 00:01:23.679 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:23.679 Message: lib/compressdev: Defining dependency "compressdev" 00:01:23.679 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:23.679 Message: lib/distributor: Defining dependency "distributor" 00:01:23.679 Message: lib/dmadev: Defining dependency "dmadev" 00:01:23.679 Message: lib/efd: Defining dependency "efd" 00:01:23.679 Message: lib/eventdev: Defining dependency "eventdev" 00:01:23.679 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:23.679 Message: lib/gpudev: Defining dependency "gpudev" 00:01:23.679 Message: lib/gro: Defining dependency "gro" 00:01:23.679 Message: lib/gso: Defining dependency "gso" 00:01:23.679 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:23.679 Message: lib/jobstats: Defining dependency "jobstats" 00:01:23.679 Message: lib/latencystats: Defining dependency "latencystats" 00:01:23.679 Message: lib/lpm: Defining dependency "lpm" 00:01:23.679 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:23.679 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:23.679 Fetching value of define "__AVX512IFMA__" : 1 00:01:23.679 Message: 
lib/member: Defining dependency "member" 00:01:23.679 Message: lib/pcapng: Defining dependency "pcapng" 00:01:23.679 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:23.679 Message: lib/power: Defining dependency "power" 00:01:23.679 Message: lib/rawdev: Defining dependency "rawdev" 00:01:23.679 Message: lib/regexdev: Defining dependency "regexdev" 00:01:23.679 Message: lib/mldev: Defining dependency "mldev" 00:01:23.679 Message: lib/rib: Defining dependency "rib" 00:01:23.679 Message: lib/reorder: Defining dependency "reorder" 00:01:23.679 Message: lib/sched: Defining dependency "sched" 00:01:23.679 Message: lib/security: Defining dependency "security" 00:01:23.679 Message: lib/stack: Defining dependency "stack" 00:01:23.679 Has header "linux/userfaultfd.h" : YES 00:01:23.679 Has header "linux/vduse.h" : YES 00:01:23.679 Message: lib/vhost: Defining dependency "vhost" 00:01:23.679 Message: lib/ipsec: Defining dependency "ipsec" 00:01:23.679 Message: lib/pdcp: Defining dependency "pdcp" 00:01:23.679 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:23.679 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:23.679 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:23.679 Message: lib/fib: Defining dependency "fib" 00:01:23.679 Message: lib/port: Defining dependency "port" 00:01:23.679 Message: lib/pdump: Defining dependency "pdump" 00:01:23.679 Message: lib/table: Defining dependency "table" 00:01:23.679 Message: lib/pipeline: Defining dependency "pipeline" 00:01:23.679 Message: lib/graph: Defining dependency "graph" 00:01:23.679 Message: lib/node: Defining dependency "node" 00:01:23.679 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:23.679 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:23.679 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:24.628 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:24.628 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:24.628 Compiler for C supports arguments -Wno-unused-value: YES 00:01:24.628 Compiler for C supports arguments -Wno-format: YES 00:01:24.628 Compiler for C supports arguments -Wno-format-security: YES 00:01:24.628 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:24.628 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:24.628 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:24.629 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:24.629 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:24.629 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:24.629 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:24.629 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:24.629 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:24.629 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:24.629 Has header "sys/epoll.h" : YES 00:01:24.629 Program doxygen found: YES (/usr/bin/doxygen) 00:01:24.629 Configuring doxy-api-html.conf using configuration 00:01:24.629 Configuring doxy-api-man.conf using configuration 00:01:24.629 Program mandb found: YES (/usr/bin/mandb) 00:01:24.629 Program sphinx-build found: NO 00:01:24.629 Configuring rte_build_config.h using configuration 00:01:24.629 Message: 00:01:24.629 ================= 00:01:24.629 Applications Enabled 00:01:24.629 ================= 00:01:24.629 00:01:24.629 apps: 00:01:24.629 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, 
test-cmdline, test-compress-perf, 00:01:24.629 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:24.629 test-pmd, test-regex, test-sad, test-security-perf, 00:01:24.629 00:01:24.629 Message: 00:01:24.629 ================= 00:01:24.629 Libraries Enabled 00:01:24.629 ================= 00:01:24.629 00:01:24.629 libs: 00:01:24.629 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:24.629 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:24.629 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:24.629 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:24.629 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:24.629 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:24.629 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:24.629 00:01:24.629 00:01:24.629 Message: 00:01:24.629 =============== 00:01:24.629 Drivers Enabled 00:01:24.629 =============== 00:01:24.629 00:01:24.629 common: 00:01:24.629 00:01:24.629 bus: 00:01:24.629 pci, vdev, 00:01:24.629 mempool: 00:01:24.629 ring, 00:01:24.629 dma: 00:01:24.629 00:01:24.629 net: 00:01:24.629 i40e, 00:01:24.629 raw: 00:01:24.629 00:01:24.629 crypto: 00:01:24.629 00:01:24.629 compress: 00:01:24.629 00:01:24.629 regex: 00:01:24.629 00:01:24.629 ml: 00:01:24.629 00:01:24.629 vdpa: 00:01:24.629 00:01:24.629 event: 00:01:24.629 00:01:24.629 baseband: 00:01:24.629 00:01:24.629 gpu: 00:01:24.629 00:01:24.629 00:01:24.629 Message: 00:01:24.629 ================= 00:01:24.629 Content Skipped 00:01:24.629 ================= 00:01:24.629 00:01:24.629 apps: 00:01:24.629 00:01:24.629 libs: 00:01:24.629 00:01:24.629 drivers: 00:01:24.629 common/cpt: not in enabled drivers build config 00:01:24.629 common/dpaax: not in enabled drivers build config 00:01:24.629 common/iavf: not in enabled drivers build config 00:01:24.629 common/idpf: not in enabled drivers build config 00:01:24.629 common/mvep: not in enabled drivers build config 00:01:24.629 common/octeontx: not in enabled drivers build config 00:01:24.629 bus/auxiliary: not in enabled drivers build config 00:01:24.629 bus/cdx: not in enabled drivers build config 00:01:24.629 bus/dpaa: not in enabled drivers build config 00:01:24.629 bus/fslmc: not in enabled drivers build config 00:01:24.629 bus/ifpga: not in enabled drivers build config 00:01:24.629 bus/platform: not in enabled drivers build config 00:01:24.629 bus/vmbus: not in enabled drivers build config 00:01:24.629 common/cnxk: not in enabled drivers build config 00:01:24.629 common/mlx5: not in enabled drivers build config 00:01:24.629 common/nfp: not in enabled drivers build config 00:01:24.629 common/qat: not in enabled drivers build config 00:01:24.629 common/sfc_efx: not in enabled drivers build config 00:01:24.629 mempool/bucket: not in enabled drivers build config 00:01:24.629 mempool/cnxk: not in enabled drivers build config 00:01:24.629 mempool/dpaa: not in enabled drivers build config 00:01:24.629 mempool/dpaa2: not in enabled drivers build config 00:01:24.629 mempool/octeontx: not in enabled drivers build config 00:01:24.629 mempool/stack: not in enabled drivers build config 00:01:24.629 dma/cnxk: not in enabled drivers build config 00:01:24.629 dma/dpaa: not in enabled drivers build config 00:01:24.629 dma/dpaa2: not in enabled drivers build config 00:01:24.629 dma/hisilicon: not in enabled drivers build config 00:01:24.629 dma/idxd: not in enabled drivers build 
config 00:01:24.629 dma/ioat: not in enabled drivers build config 00:01:24.629 dma/skeleton: not in enabled drivers build config 00:01:24.629 net/af_packet: not in enabled drivers build config 00:01:24.629 net/af_xdp: not in enabled drivers build config 00:01:24.629 net/ark: not in enabled drivers build config 00:01:24.629 net/atlantic: not in enabled drivers build config 00:01:24.629 net/avp: not in enabled drivers build config 00:01:24.629 net/axgbe: not in enabled drivers build config 00:01:24.629 net/bnx2x: not in enabled drivers build config 00:01:24.629 net/bnxt: not in enabled drivers build config 00:01:24.629 net/bonding: not in enabled drivers build config 00:01:24.629 net/cnxk: not in enabled drivers build config 00:01:24.629 net/cpfl: not in enabled drivers build config 00:01:24.629 net/cxgbe: not in enabled drivers build config 00:01:24.629 net/dpaa: not in enabled drivers build config 00:01:24.629 net/dpaa2: not in enabled drivers build config 00:01:24.629 net/e1000: not in enabled drivers build config 00:01:24.629 net/ena: not in enabled drivers build config 00:01:24.629 net/enetc: not in enabled drivers build config 00:01:24.629 net/enetfec: not in enabled drivers build config 00:01:24.629 net/enic: not in enabled drivers build config 00:01:24.629 net/failsafe: not in enabled drivers build config 00:01:24.629 net/fm10k: not in enabled drivers build config 00:01:24.629 net/gve: not in enabled drivers build config 00:01:24.629 net/hinic: not in enabled drivers build config 00:01:24.629 net/hns3: not in enabled drivers build config 00:01:24.629 net/iavf: not in enabled drivers build config 00:01:24.629 net/ice: not in enabled drivers build config 00:01:24.629 net/idpf: not in enabled drivers build config 00:01:24.629 net/igc: not in enabled drivers build config 00:01:24.629 net/ionic: not in enabled drivers build config 00:01:24.629 net/ipn3ke: not in enabled drivers build config 00:01:24.629 net/ixgbe: not in enabled drivers build config 00:01:24.629 net/mana: not in enabled drivers build config 00:01:24.629 net/memif: not in enabled drivers build config 00:01:24.629 net/mlx4: not in enabled drivers build config 00:01:24.629 net/mlx5: not in enabled drivers build config 00:01:24.629 net/mvneta: not in enabled drivers build config 00:01:24.629 net/mvpp2: not in enabled drivers build config 00:01:24.629 net/netvsc: not in enabled drivers build config 00:01:24.629 net/nfb: not in enabled drivers build config 00:01:24.629 net/nfp: not in enabled drivers build config 00:01:24.629 net/ngbe: not in enabled drivers build config 00:01:24.629 net/null: not in enabled drivers build config 00:01:24.629 net/octeontx: not in enabled drivers build config 00:01:24.629 net/octeon_ep: not in enabled drivers build config 00:01:24.629 net/pcap: not in enabled drivers build config 00:01:24.629 net/pfe: not in enabled drivers build config 00:01:24.629 net/qede: not in enabled drivers build config 00:01:24.629 net/ring: not in enabled drivers build config 00:01:24.629 net/sfc: not in enabled drivers build config 00:01:24.629 net/softnic: not in enabled drivers build config 00:01:24.629 net/tap: not in enabled drivers build config 00:01:24.629 net/thunderx: not in enabled drivers build config 00:01:24.629 net/txgbe: not in enabled drivers build config 00:01:24.629 net/vdev_netvsc: not in enabled drivers build config 00:01:24.629 net/vhost: not in enabled drivers build config 00:01:24.629 net/virtio: not in enabled drivers build config 00:01:24.629 net/vmxnet3: not in enabled drivers build config 
00:01:24.629 raw/cnxk_bphy: not in enabled drivers build config 00:01:24.629 raw/cnxk_gpio: not in enabled drivers build config 00:01:24.629 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:24.629 raw/ifpga: not in enabled drivers build config 00:01:24.629 raw/ntb: not in enabled drivers build config 00:01:24.629 raw/skeleton: not in enabled drivers build config 00:01:24.629 crypto/armv8: not in enabled drivers build config 00:01:24.629 crypto/bcmfs: not in enabled drivers build config 00:01:24.629 crypto/caam_jr: not in enabled drivers build config 00:01:24.629 crypto/ccp: not in enabled drivers build config 00:01:24.629 crypto/cnxk: not in enabled drivers build config 00:01:24.629 crypto/dpaa_sec: not in enabled drivers build config 00:01:24.629 crypto/dpaa2_sec: not in enabled drivers build config 00:01:24.629 crypto/ipsec_mb: not in enabled drivers build config 00:01:24.629 crypto/mlx5: not in enabled drivers build config 00:01:24.629 crypto/mvsam: not in enabled drivers build config 00:01:24.629 crypto/nitrox: not in enabled drivers build config 00:01:24.629 crypto/null: not in enabled drivers build config 00:01:24.629 crypto/octeontx: not in enabled drivers build config 00:01:24.629 crypto/openssl: not in enabled drivers build config 00:01:24.629 crypto/scheduler: not in enabled drivers build config 00:01:24.629 crypto/uadk: not in enabled drivers build config 00:01:24.629 crypto/virtio: not in enabled drivers build config 00:01:24.629 compress/isal: not in enabled drivers build config 00:01:24.629 compress/mlx5: not in enabled drivers build config 00:01:24.629 compress/octeontx: not in enabled drivers build config 00:01:24.629 compress/zlib: not in enabled drivers build config 00:01:24.629 regex/mlx5: not in enabled drivers build config 00:01:24.629 regex/cn9k: not in enabled drivers build config 00:01:24.629 ml/cnxk: not in enabled drivers build config 00:01:24.629 vdpa/ifc: not in enabled drivers build config 00:01:24.629 vdpa/mlx5: not in enabled drivers build config 00:01:24.629 vdpa/nfp: not in enabled drivers build config 00:01:24.629 vdpa/sfc: not in enabled drivers build config 00:01:24.629 event/cnxk: not in enabled drivers build config 00:01:24.629 event/dlb2: not in enabled drivers build config 00:01:24.629 event/dpaa: not in enabled drivers build config 00:01:24.629 event/dpaa2: not in enabled drivers build config 00:01:24.630 event/dsw: not in enabled drivers build config 00:01:24.630 event/opdl: not in enabled drivers build config 00:01:24.630 event/skeleton: not in enabled drivers build config 00:01:24.630 event/sw: not in enabled drivers build config 00:01:24.630 event/octeontx: not in enabled drivers build config 00:01:24.630 baseband/acc: not in enabled drivers build config 00:01:24.630 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:24.630 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:24.630 baseband/la12xx: not in enabled drivers build config 00:01:24.630 baseband/null: not in enabled drivers build config 00:01:24.630 baseband/turbo_sw: not in enabled drivers build config 00:01:24.630 gpu/cuda: not in enabled drivers build config 00:01:24.630 00:01:24.630 00:01:24.630 Build targets in project: 215 00:01:24.630 00:01:24.630 DPDK 23.11.0 00:01:24.630 00:01:24.630 User defined options 00:01:24.630 libdir : lib 00:01:24.630 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:24.630 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:24.630 c_link_args : 00:01:24.630 enable_docs : false 
00:01:24.630 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:24.630 enable_kmods : false 00:01:24.630 machine : native 00:01:24.630 tests : false 00:01:24.630 00:01:24.630 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:24.630 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:24.630 11:55:37 -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 00:01:24.630 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:24.630 [1/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:24.630 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:24.630 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:24.630 [4/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:24.630 [5/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:24.899 [6/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:24.899 [7/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:24.899 [8/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:24.899 [9/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:24.899 [10/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:24.899 [11/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:24.899 [12/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:24.899 [13/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:24.899 [14/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:24.899 [15/705] Linking static target lib/librte_kvargs.a 00:01:24.899 [16/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:24.899 [17/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:24.899 [18/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:24.899 [19/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:24.899 [20/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:24.899 [21/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:24.899 [22/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:24.899 [23/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:24.899 [24/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:24.899 [25/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:24.899 [26/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:24.899 [27/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:24.899 [28/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:24.899 [29/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:24.899 [30/705] Linking static target lib/librte_pci.a 00:01:24.899 [31/705] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:24.899 [32/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:24.899 [33/705] Linking static target lib/librte_log.a 00:01:25.158 [34/705] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:25.158 [35/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:25.158 [36/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:25.158 [37/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.158 [38/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.420 [39/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:25.420 [40/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:25.420 [41/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:25.420 [42/705] Linking static target lib/librte_cfgfile.a 00:01:25.420 [43/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:25.420 [44/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:25.420 [45/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:25.420 [46/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:25.420 [47/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:25.420 [48/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:25.420 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:25.420 [50/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:25.420 [51/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:25.420 [52/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:25.420 [53/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:25.420 [54/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:25.420 [55/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:25.420 [56/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:25.420 [57/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:25.420 [58/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:25.420 [59/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:25.420 [60/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:25.420 [61/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:25.420 [62/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:25.420 [63/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:25.420 [64/705] Linking static target lib/librte_meter.a 00:01:25.420 [65/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:25.420 [66/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:25.420 [67/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:25.420 [68/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:25.420 [69/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:25.420 [70/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:25.420 [71/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:25.420 [72/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:25.420 [73/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:25.420 [74/705] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:25.420 [75/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:25.420 [76/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:25.420 [77/705] Linking static target lib/librte_ring.a 00:01:25.420 [78/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:25.420 [79/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:25.420 [80/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:25.420 [81/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:25.420 [82/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:25.420 [83/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:25.420 [84/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:25.420 [85/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:25.420 [86/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:25.420 [87/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:25.680 [88/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:25.680 [89/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:25.680 [90/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:25.680 [91/705] Linking static target lib/librte_cmdline.a 00:01:25.680 [92/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:25.680 [93/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:25.680 [94/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:25.680 [95/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:25.680 [96/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:25.680 [97/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:25.680 [98/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:25.680 [99/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:25.680 [100/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:25.680 [101/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:25.680 [102/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:25.680 [103/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:25.680 [104/705] Linking static target lib/librte_metrics.a 00:01:25.680 [105/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:25.680 [106/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:25.680 [107/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:25.680 [108/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:25.680 [109/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:25.680 [110/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:25.680 [111/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:25.680 [112/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:25.680 [113/705] Linking static target lib/librte_bitratestats.a 00:01:25.680 [114/705] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:25.680 [115/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:25.680 [116/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:25.680 [117/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:25.680 [118/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:25.680 [119/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:25.680 [120/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:25.680 [121/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.680 [122/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:25.680 [123/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:25.680 [124/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:25.680 [125/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:25.680 [126/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:25.680 [127/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:25.680 [128/705] Linking static target lib/librte_net.a 00:01:25.680 [129/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:25.680 [130/705] Linking target lib/librte_log.so.24.0 00:01:25.680 [131/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:25.680 [132/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:25.680 [133/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:25.680 [134/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:25.941 [135/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:25.941 [136/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.941 [137/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:25.941 [138/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:25.941 [139/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:25.941 [140/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:25.941 [141/705] Linking static target lib/librte_compressdev.a 00:01:25.941 [142/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:25.941 [143/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.941 [144/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:25.941 [145/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:25.941 [146/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:25.941 [147/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:25.941 [148/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:25.941 [149/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:25.941 [150/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:25.941 [151/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:25.941 [152/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:25.941 [153/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:25.941 [154/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:25.941 
[155/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.941 [156/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:25.941 [157/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:25.941 [158/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:25.941 [159/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:25.941 [160/705] Linking static target lib/librte_timer.a 00:01:25.941 [161/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.941 [162/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:25.941 [163/705] Linking static target lib/librte_dispatcher.a 00:01:25.941 [164/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:25.941 [165/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:25.941 [166/705] Linking target lib/librte_kvargs.so.24.0 00:01:25.941 [167/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:25.941 [168/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:25.941 [169/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:25.941 [170/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:25.941 [171/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:25.941 [172/705] Linking static target lib/librte_bbdev.a 00:01:25.941 [173/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:25.941 [174/705] Linking static target lib/librte_gpudev.a 00:01:26.210 [175/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:26.210 [176/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:26.210 [177/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:26.210 [178/705] Linking static target lib/librte_gro.a 00:01:26.210 [179/705] Linking static target lib/librte_jobstats.a 00:01:26.210 [180/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:26.210 [181/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:26.210 [182/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:26.210 [183/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:26.210 [184/705] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.210 [185/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:26.210 [186/705] Linking static target lib/librte_mempool.a 00:01:26.210 [187/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:26.210 [188/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:26.210 [189/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:01:26.210 [190/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:26.210 [191/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:26.210 [192/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:26.210 [193/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:26.210 [194/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:26.210 [195/705] Linking static target lib/librte_dmadev.a 00:01:26.210 [196/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:26.210 [197/705] Compiling C object 
lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:26.210 [198/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:26.210 [199/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:26.210 [200/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.210 [201/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:26.210 [202/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:26.210 [203/705] Linking static target lib/librte_distributor.a 00:01:26.210 [204/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:26.210 [205/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:26.210 [206/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:26.210 [207/705] Linking static target lib/librte_stack.a 00:01:26.210 [208/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:26.210 [209/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:26.210 [210/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:26.210 [211/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:26.210 [212/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:26.210 [213/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:26.210 [214/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:26.210 [215/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:26.210 [216/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:26.210 [217/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:26.210 [218/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:26.471 [219/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:26.471 [220/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:26.471 [221/705] Linking static target lib/librte_gso.a 00:01:26.471 [222/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:26.471 [223/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:26.471 [224/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:26.471 [225/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:26.471 [226/705] Linking static target lib/librte_latencystats.a 00:01:26.471 [227/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:26.471 [228/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:26.471 [229/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:26.471 [230/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:26.471 [231/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:26.471 [232/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:26.471 [233/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:26.471 [234/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:26.471 [235/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:26.471 [236/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:26.471 [237/705] Linking static target lib/librte_regexdev.a 00:01:26.471 [238/705] Compiling C object 
lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:26.471 [239/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:26.471 [240/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.471 [241/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:26.471 [242/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:26.471 [243/705] Linking static target lib/librte_pcapng.a 00:01:26.471 [244/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:26.471 [245/705] Linking static target lib/librte_telemetry.a 00:01:26.471 [246/705] Linking static target lib/librte_rawdev.a 00:01:26.471 [247/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:26.471 [248/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:26.471 [249/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:26.471 [250/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:26.471 [251/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:26.471 [252/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:26.471 [253/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:26.471 [254/705] Linking static target lib/librte_mldev.a 00:01:26.471 [255/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:26.471 [256/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.471 [257/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:26.471 [258/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:26.471 [259/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:26.471 [260/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:26.471 [261/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.471 [262/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:26.471 [263/705] Linking static target lib/librte_ip_frag.a 00:01:26.471 [264/705] Linking static target lib/librte_rcu.a 00:01:26.471 [265/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:26.471 [266/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:26.471 [267/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.471 [268/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:26.471 [269/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:26.471 [270/705] Linking static target lib/librte_eal.a 00:01:26.471 [271/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:26.471 [272/705] Linking static target lib/librte_bpf.a 00:01:26.471 [273/705] Linking static target lib/librte_reorder.a 00:01:26.471 [274/705] Linking static target lib/librte_power.a 00:01:26.471 [275/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:26.471 [276/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.730 [277/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:26.730 [278/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.730 [279/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:26.730 [280/705] 
Linking static target lib/librte_security.a 00:01:26.730 [281/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:26.730 [282/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.730 [283/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:26.730 [284/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.730 [285/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:26.730 [286/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:26.730 [287/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:26.730 [288/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:26.730 [289/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.730 [290/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:26.730 [291/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:26.730 [292/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:26.730 [293/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:26.730 [294/705] Linking static target lib/librte_mbuf.a 00:01:26.730 [295/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.730 [296/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:26.730 [297/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:26.730 [298/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:26.731 [299/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:26.731 [300/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:26.731 [301/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:26.731 [302/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.731 [303/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:26.731 [304/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:26.731 [305/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:26.731 [306/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:26.731 [307/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:26.731 [308/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:26.731 [309/705] Linking static target lib/librte_rib.a 00:01:26.994 [310/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:26.994 [311/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:26.994 [312/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:26.994 [313/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:26.994 [314/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:26.994 [315/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:26.994 [316/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:26.994 [317/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:26.994 [318/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.994 [319/705] Compiling C 
object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:26.994 [320/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:26.994 [321/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:26.994 [322/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:26.994 [323/705] Linking static target lib/librte_efd.a 00:01:26.994 [324/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:26.994 [325/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:26.994 [326/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.994 [327/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:26.994 [328/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:26.994 [329/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:26.994 [330/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:26.994 [331/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:26.994 [332/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.994 [333/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:26.994 [334/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:26.994 [335/705] Linking static target lib/librte_lpm.a 00:01:26.994 [336/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:26.994 [337/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:26.994 [338/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:26.994 [339/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.994 [340/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:26.994 [341/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:26.994 [342/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:26.994 [343/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:26.994 [344/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.994 [345/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.994 [346/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:26.994 [347/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.994 [348/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:26.994 [349/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.994 [350/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:26.994 [351/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:26.994 [352/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:26.994 [353/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:26.994 [354/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:27.256 [355/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:27.256 [356/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:27.256 [357/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:27.256 [358/705] Linking target 
lib/librte_telemetry.so.24.0 00:01:27.256 [359/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:27.256 [360/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.256 [361/705] Linking static target lib/librte_fib.a 00:01:27.256 [362/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:27.256 [363/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:27.256 [364/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:27.256 [365/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:27.256 [366/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:27.256 [367/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:27.256 [368/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:27.256 [369/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.256 [370/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:27.256 [371/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:27.256 [372/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:27.256 [373/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:27.256 [374/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:27.256 [375/705] Linking static target lib/librte_graph.a 00:01:27.256 [376/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:27.256 [377/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:27.256 [378/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.256 [379/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:27.256 [380/705] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:27.256 [381/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:27.256 [382/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:27.256 [383/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:27.256 [384/705] Linking static target lib/librte_pdump.a 00:01:27.256 [385/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:27.256 [386/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:27.256 [387/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:27.256 [388/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:27.256 [389/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:27.513 [390/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:27.513 [391/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:27.513 [392/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:27.513 [393/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:27.513 [394/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:27.513 [395/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:27.513 [396/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:27.513 [397/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:27.513 [398/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:27.513 [399/705] Generating lib/gpudev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:27.513 [400/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:27.513 [401/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:27.513 [402/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:27.513 [403/705] Linking static target drivers/librte_bus_vdev.a 00:01:27.513 [404/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:27.513 [405/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:27.513 [406/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:27.513 [407/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:27.513 [408/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:27.513 [409/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:27.513 [410/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:27.513 [411/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:27.513 [412/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.513 [413/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:27.513 [414/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:27.513 [415/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:27.513 [416/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:27.513 [417/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.513 [418/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:27.513 [419/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:27.513 [420/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.513 [421/705] Linking static target lib/librte_table.a 00:01:27.513 [422/705] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.513 [423/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:27.770 [424/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:27.771 [425/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.771 [426/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:27.771 [427/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:27.771 [428/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:27.771 [429/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:27.771 [430/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:27.771 [431/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.771 [432/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:27.771 [433/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:27.771 [434/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:27.771 [435/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:27.771 
[436/705] Linking static target drivers/librte_bus_pci.a 00:01:27.771 [437/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.771 [438/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:27.771 [439/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:27.771 [440/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:27.771 [441/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:27.771 [442/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:27.771 [443/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:27.771 [444/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:27.771 [445/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:27.771 [446/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:27.771 [447/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:27.771 [448/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:27.771 [449/705] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.771 [450/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:27.771 [451/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:27.771 [452/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:27.771 [453/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:27.771 [454/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:27.771 [455/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:27.771 [456/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:27.771 [457/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:27.771 [458/705] Linking static target lib/librte_sched.a 00:01:27.771 [459/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:27.771 [460/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:27.771 [461/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:27.771 [462/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:27.771 [463/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:27.771 [464/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:27.771 [465/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:27.771 [466/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:27.771 [467/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:27.771 [468/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:27.771 [469/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:27.771 [470/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:27.771 [471/705] Linking static target lib/librte_cryptodev.a 00:01:27.771 [472/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:28.032 [473/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:28.032 
[474/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:28.032 [475/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:28.032 [476/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:28.032 [477/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:28.032 [478/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:28.032 [479/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:28.032 [480/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:28.032 [481/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:28.032 [482/705] Linking static target lib/librte_node.a 00:01:28.032 [483/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:28.032 [484/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:28.032 [485/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:28.032 [486/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:28.032 [487/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:28.032 [488/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:28.032 [489/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:28.032 [490/705] Linking static target lib/librte_ipsec.a 00:01:28.032 [491/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.032 [492/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:28.032 [493/705] Linking static target lib/librte_pdcp.a 00:01:28.032 [494/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:28.032 [495/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:28.032 [496/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:28.032 [497/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:28.032 [498/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:28.032 [499/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:28.032 [500/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:28.032 [501/705] Linking static target drivers/librte_mempool_ring.a 00:01:28.032 [502/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:28.032 [503/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:28.032 [504/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:28.032 [505/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:28.032 [506/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:28.032 [507/705] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:28.032 [508/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:28.032 [509/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:28.032 [510/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:28.032 [511/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:28.032 [512/705] Compiling C object 
app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:28.032 [513/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:28.293 [514/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:28.293 [515/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:28.293 [516/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:28.293 [517/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:28.293 [518/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:28.293 [519/705] Linking static target lib/librte_member.a 00:01:28.293 [520/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:28.293 [521/705] Linking static target lib/acl/libavx2_tmp.a 00:01:28.293 [522/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:28.293 [523/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:28.293 [524/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:28.293 [525/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:28.293 [526/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:28.293 [527/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.293 [528/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.293 [529/705] Linking static target lib/librte_port.a 00:01:28.293 [530/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:28.293 [531/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:28.293 [532/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:28.293 [533/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.293 [534/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:28.293 [535/705] Linking static target lib/librte_hash.a 00:01:28.293 [536/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:28.293 [537/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:28.293 [538/705] Linking static target lib/librte_eventdev.a 00:01:28.293 [539/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:28.293 [540/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:28.293 [541/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:28.293 [542/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.293 [543/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.293 [544/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:28.293 [545/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.555 [546/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.555 [547/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:28.555 [548/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:28.555 [549/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:28.555 [550/705] Compiling C object 
app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:28.555 [551/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:28.555 [552/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:28.555 [553/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:28.555 [554/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:28.555 [555/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.555 [556/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:28.555 [557/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:28.555 [558/705] Linking static target lib/librte_acl.a 00:01:28.555 [559/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:28.555 [560/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:28.555 [561/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:28.555 [562/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:28.555 [563/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:28.816 [564/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:28.816 [565/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:28.816 [566/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:29.076 [567/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.076 [568/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.076 [569/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:29.076 [570/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.076 [571/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:29.335 [572/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:29.595 [573/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:29.595 [574/705] Linking static target lib/librte_ethdev.a 00:01:29.595 [575/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:29.595 [576/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:29.595 [577/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:29.856 [578/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.117 [579/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:30.117 [580/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:30.378 [581/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:30.378 [582/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:30.378 [583/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:30.378 [584/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:30.378 [585/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:30.378 [586/705] Linking static target drivers/librte_net_i40e.a 00:01:31.382 [587/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:31.643 [588/705] Generating drivers/rte_net_i40e.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:31.905 [589/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.165 [590/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:36.370 [591/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:01:36.370 [592/705] Linking static target lib/librte_pipeline.a 00:01:37.310 [593/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.310 [594/705] Linking target lib/librte_eal.so.24.0 00:01:37.310 [595/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:37.310 [596/705] Linking static target lib/librte_vhost.a 00:01:37.310 [597/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:37.311 [598/705] Linking target lib/librte_dmadev.so.24.0 00:01:37.571 [599/705] Linking target lib/librte_meter.so.24.0 00:01:37.571 [600/705] Linking target lib/librte_pci.so.24.0 00:01:37.571 [601/705] Linking target lib/librte_stack.so.24.0 00:01:37.571 [602/705] Linking target lib/librte_ring.so.24.0 00:01:37.571 [603/705] Linking target lib/librte_timer.so.24.0 00:01:37.571 [604/705] Linking target lib/librte_rawdev.so.24.0 00:01:37.571 [605/705] Linking target lib/librte_jobstats.so.24.0 00:01:37.571 [606/705] Linking target lib/librte_cfgfile.so.24.0 00:01:37.571 [607/705] Linking target drivers/librte_bus_vdev.so.24.0 00:01:37.571 [608/705] Linking target lib/librte_acl.so.24.0 00:01:37.571 [609/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.571 [610/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:37.571 [611/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:37.571 [612/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:37.571 [613/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:37.571 [614/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:37.571 [615/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:37.571 [616/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:37.571 [617/705] Linking target lib/librte_rcu.so.24.0 00:01:37.571 [618/705] Linking target lib/librte_mempool.so.24.0 00:01:37.571 [619/705] Linking target drivers/librte_bus_pci.so.24.0 00:01:37.571 [620/705] Linking target app/dpdk-proc-info 00:01:37.571 [621/705] Linking target app/dpdk-test-acl 00:01:37.571 [622/705] Linking target app/dpdk-test-compress-perf 00:01:37.571 [623/705] Linking target app/dpdk-test-cmdline 00:01:37.571 [624/705] Linking target app/dpdk-test-eventdev 00:01:37.833 [625/705] Linking target app/dpdk-test-regex 00:01:37.833 [626/705] Linking target app/dpdk-test-mldev 00:01:37.833 [627/705] Linking target app/dpdk-test-crypto-perf 00:01:37.833 [628/705] Linking target app/dpdk-testpmd 00:01:37.833 [629/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:37.833 [630/705] Linking target app/dpdk-pdump 00:01:37.833 [631/705] Linking target app/dpdk-dumpcap 00:01:37.833 [632/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:37.833 [633/705] Linking target app/dpdk-test-dma-perf 00:01:37.833 [634/705] Linking target app/dpdk-test-gpudev 00:01:37.833 
[635/705] Linking target app/dpdk-test-bbdev 00:01:37.833 [636/705] Linking target app/dpdk-test-fib 00:01:37.833 [637/705] Linking target app/dpdk-test-pipeline 00:01:37.833 [638/705] Linking target app/dpdk-test-sad 00:01:37.833 [639/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:37.833 [640/705] Linking target app/dpdk-graph 00:01:37.833 [641/705] Linking target app/dpdk-test-flow-perf 00:01:37.833 [642/705] Linking target app/dpdk-test-security-perf 00:01:37.833 [643/705] Linking target drivers/librte_mempool_ring.so.24.0 00:01:37.833 [644/705] Linking target lib/librte_mbuf.so.24.0 00:01:37.833 [645/705] Linking target lib/librte_rib.so.24.0 00:01:37.833 [646/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:38.094 [647/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:38.094 [648/705] Linking target lib/librte_net.so.24.0 00:01:38.094 [649/705] Linking target lib/librte_compressdev.so.24.0 00:01:38.094 [650/705] Linking target lib/librte_bbdev.so.24.0 00:01:38.094 [651/705] Linking target lib/librte_regexdev.so.24.0 00:01:38.094 [652/705] Linking target lib/librte_distributor.so.24.0 00:01:38.094 [653/705] Linking target lib/librte_gpudev.so.24.0 00:01:38.094 [654/705] Linking target lib/librte_reorder.so.24.0 00:01:38.094 [655/705] Linking target lib/librte_mldev.so.24.0 00:01:38.094 [656/705] Linking target lib/librte_cryptodev.so.24.0 00:01:38.094 [657/705] Linking target lib/librte_sched.so.24.0 00:01:38.094 [658/705] Linking target lib/librte_fib.so.24.0 00:01:38.094 [659/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:38.094 [660/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:38.094 [661/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:38.094 [662/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:38.094 [663/705] Linking target lib/librte_security.so.24.0 00:01:38.094 [664/705] Linking target lib/librte_hash.so.24.0 00:01:38.094 [665/705] Linking target lib/librte_cmdline.so.24.0 00:01:38.355 [666/705] Linking target lib/librte_ethdev.so.24.0 00:01:38.355 [667/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:38.355 [668/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:38.355 [669/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:38.355 [670/705] Linking target lib/librte_pdcp.so.24.0 00:01:38.355 [671/705] Linking target lib/librte_efd.so.24.0 00:01:38.355 [672/705] Linking target lib/librte_lpm.so.24.0 00:01:38.355 [673/705] Linking target lib/librte_member.so.24.0 00:01:38.355 [674/705] Linking target lib/librte_ipsec.so.24.0 00:01:38.355 [675/705] Linking target lib/librte_gso.so.24.0 00:01:38.355 [676/705] Linking target lib/librte_metrics.so.24.0 00:01:38.355 [677/705] Linking target lib/librte_pcapng.so.24.0 00:01:38.355 [678/705] Linking target lib/librte_gro.so.24.0 00:01:38.355 [679/705] Linking target lib/librte_bpf.so.24.0 00:01:38.355 [680/705] Linking target lib/librte_ip_frag.so.24.0 00:01:38.355 [681/705] Linking target lib/librte_power.so.24.0 00:01:38.355 [682/705] Linking target lib/librte_eventdev.so.24.0 00:01:38.355 [683/705] Linking target drivers/librte_net_i40e.so.24.0 00:01:38.617 [684/705] Generating symbol file 
lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:38.617 [685/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:38.617 [686/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:38.617 [687/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:38.617 [688/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:38.617 [689/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:38.617 [690/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:38.617 [691/705] Linking target lib/librte_bitratestats.so.24.0 00:01:38.617 [692/705] Linking target lib/librte_latencystats.so.24.0 00:01:38.617 [693/705] Linking target lib/librte_pdump.so.24.0 00:01:38.617 [694/705] Linking target lib/librte_graph.so.24.0 00:01:38.617 [695/705] Linking target lib/librte_dispatcher.so.24.0 00:01:38.617 [696/705] Linking target lib/librte_port.so.24.0 00:01:38.879 [697/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:38.879 [698/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:38.879 [699/705] Linking target lib/librte_node.so.24.0 00:01:38.879 [700/705] Linking target lib/librte_table.so.24.0 00:01:38.879 [701/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:39.451 [702/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.451 [703/705] Linking target lib/librte_vhost.so.24.0 00:01:41.366 [704/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.366 [705/705] Linking target lib/librte_pipeline.so.24.0 00:01:41.366 11:55:54 -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 install 00:01:41.366 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:41.366 [0/1] Installing files. 
00:01:41.631 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:41.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:41.632 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:41.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:41.633 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:41.633 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:41.634 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:41.635 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:41.636 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:41.636 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:41.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:41.637 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:41.637 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:41.637 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.637 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:41.903 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:41.903 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:41.903 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:41.903 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:41.903 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:41.903 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:41.903 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:41.903 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:41.903 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:41.903 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:41.903 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:41.903 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:41.903 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:41.903 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:41.903 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:41.903 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:41.903 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:41.903 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:41.903 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:41.903 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:41.903 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:41.903 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:41.903 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:41.903 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.903 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.903 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.904 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.905 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.906 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:41.907 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:41.907 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:01:41.907 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:01:41.907 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:01:41.907 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:01:41.908 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:01:41.908 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:01:41.908 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:01:41.908 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:01:41.908 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:01:41.908 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:01:41.908 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:01:41.908 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:01:41.908 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:01:41.908 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:01:41.908 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:01:41.908 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:01:41.908 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:01:41.908 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:01:41.908 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:01:41.908 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:01:41.908 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:01:41.908 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:01:41.908 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:01:41.908 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:01:41.908 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:01:41.908 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:01:41.908 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:01:41.908 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:01:41.908 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:01:41.908 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:01:41.908 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:01:41.908 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:01:41.908 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:01:41.908 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:01:41.908 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:01:41.908 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:01:41.908 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:01:41.908 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:01:41.908 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:01:41.908 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:01:41.908 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:01:41.908 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:01:41.908 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:01:41.908 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:01:41.908 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:01:41.908 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:01:41.908 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:01:41.908 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:01:41.908 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:01:41.908 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:01:41.908 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:01:41.908 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:01:41.908 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:01:41.908 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:01:41.908 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:01:41.908 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:01:41.908 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:01:41.908 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:01:41.908 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:01:41.908 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:01:41.908 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:01:41.908 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:01:41.908 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:01:41.908 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:01:41.908 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:01:41.908 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:01:41.908 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:01:41.908 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:01:41.908 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:01:41.908 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:01:41.908 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:01:41.908 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:01:41.909 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:01:41.909 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:01:41.909 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:01:41.909 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:01:41.909 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:01:41.909 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:01:41.909 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:01:41.909 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:01:41.909 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:01:41.909 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:01:41.909 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:01:41.909 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:01:41.909 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:01:41.909 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:01:41.909 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:01:41.909 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:01:41.909 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:01:41.909 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:01:41.909 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:01:41.909 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:01:41.909 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:01:41.909 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:01:41.909 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:01:41.909 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:01:41.909 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:01:41.909 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:01:41.909 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:01:41.909 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:01:41.909 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:01:41.909 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:01:41.909 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:01:41.909 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:01:41.909 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:01:41.909 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:01:41.909 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:01:41.909 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:01:41.909 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:01:41.909 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:01:41.909 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:01:41.909 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:01:41.909 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:01:41.909 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:01:41.909 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:01:41.909 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:01:41.909 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:01:41.909 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:01:41.909 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:01:41.909 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:01:41.909 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:01:41.909 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:01:41.909 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:01:41.909 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:01:41.909 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:01:41.909 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:01:41.909 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:01:41.909 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:01:41.909 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:01:41.909 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:01:41.909 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:01:41.909 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:01:41.909 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:01:41.909 11:55:54 -- common/autobuild_common.sh@189 -- $ uname -s 00:01:41.909 11:55:54 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:01:41.909 11:55:54 -- common/autobuild_common.sh@200 -- $ cat 00:01:41.909 11:55:54 -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:41.909 00:01:41.909 real 0m23.631s 00:01:41.909 user 7m3.706s 00:01:41.909 sys 2m44.464s 00:01:41.909 11:55:54 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:41.909 11:55:54 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.909 ************************************ 00:01:41.909 END TEST build_native_dpdk 00:01:41.909 ************************************ 00:01:41.909 11:55:54 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:41.909 11:55:54 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:41.909 11:55:54 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:41.909 11:55:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:41.909 11:55:54 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:41.909 11:55:54 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:41.909 11:55:54 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:41.909 11:55:54 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:01:42.171 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:01:42.171 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.171 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.432 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:42.692 Using 'verbs' RDMA provider 00:01:58.171 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:02:10.400 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:10.400 Creating mk/config.mk...done. 00:02:10.400 Creating mk/cc.flags.mk...done. 00:02:10.400 Type 'make' to build. 00:02:10.400 11:56:21 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:02:10.400 11:56:21 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:10.400 11:56:21 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:10.400 11:56:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.400 ************************************ 00:02:10.400 START TEST make 00:02:10.400 ************************************ 00:02:10.400 11:56:21 -- common/autotest_common.sh@1104 -- $ make -j144 00:02:10.400 make[1]: Nothing to be done for 'all'. 
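Editor's sketch (not part of the captured log): the SPDK configure step above consumes the DPDK tree staged under dpdk/build through the pkg-config files installed a few lines earlier ("Using .../dpdk/build/lib/pkgconfig for additional libs"). Under that assumption, the same staged install can be queried directly; PKG_CONFIG_PATH below is simply the pkgconfig directory the configure output reports.

# minimal sketch, assuming the staged DPDK build shown in the install log above
export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
# report the version of the staged DPDK described by libdpdk.pc
pkg-config --modversion libdpdk
# compiler and linker flags a consumer such as SPDK's configure would pass on
pkg-config --cflags --libs libdpdk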
00:02:10.660 The Meson build system 00:02:10.660 Version: 1.3.1 00:02:10.660 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:10.660 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:10.660 Build type: native build 00:02:10.660 Project name: libvfio-user 00:02:10.660 Project version: 0.0.1 00:02:10.660 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:10.660 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:10.660 Host machine cpu family: x86_64 00:02:10.660 Host machine cpu: x86_64 00:02:10.660 Run-time dependency threads found: YES 00:02:10.660 Library dl found: YES 00:02:10.660 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:10.660 Run-time dependency json-c found: YES 0.17 00:02:10.660 Run-time dependency cmocka found: YES 1.1.7 00:02:10.660 Program pytest-3 found: NO 00:02:10.660 Program flake8 found: NO 00:02:10.660 Program misspell-fixer found: NO 00:02:10.660 Program restructuredtext-lint found: NO 00:02:10.660 Program valgrind found: YES (/usr/bin/valgrind) 00:02:10.660 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:10.660 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:10.660 Compiler for C supports arguments -Wwrite-strings: YES 00:02:10.660 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:10.660 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:10.660 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:10.660 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:10.660 Build targets in project: 8 00:02:10.660 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:10.660 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:10.660 00:02:10.660 libvfio-user 0.0.1 00:02:10.660 00:02:10.660 User defined options 00:02:10.660 buildtype : debug 00:02:10.660 default_library: shared 00:02:10.660 libdir : /usr/local/lib 00:02:10.660 00:02:10.660 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:10.919 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:10.919 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:10.919 [2/37] Compiling C object samples/null.p/null.c.o 00:02:10.919 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:10.919 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:10.919 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:10.919 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:10.919 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:10.919 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:10.919 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:10.919 [10/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:10.919 [11/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:10.919 [12/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:10.919 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:10.919 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:10.919 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:10.919 [16/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:10.919 [17/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:10.919 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:11.177 [19/37] Compiling C object samples/server.p/server.c.o 00:02:11.177 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:11.177 [21/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:11.178 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:11.178 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:11.178 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:11.178 [25/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:11.178 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:11.178 [27/37] Compiling C object samples/client.p/client.c.o 00:02:11.178 [28/37] Linking target samples/client 00:02:11.178 [29/37] Linking target test/unit_tests 00:02:11.178 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:11.178 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:02:11.438 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:11.438 [33/37] Linking target samples/null 00:02:11.438 [34/37] Linking target samples/server 00:02:11.438 [35/37] Linking target samples/gpio-pci-idio-16 00:02:11.438 [36/37] Linking target samples/lspci 00:02:11.438 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:11.438 INFO: autodetecting backend as ninja 00:02:11.438 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
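Editor's sketch (not commands from this log): the libvfio-user subproject above is configured with Meson, compiled with the autodetected ninja backend, and then staged with a DESTDIR install. A minimal sketch of that cycle, using the option values shown in the configuration summary (buildtype debug, shared default library); the source and staging paths here are placeholders, not the workspace paths from the log.

# configure an out-of-tree build directory; option values mirror the summary above
meson setup build-debug /path/to/libvfio-user --buildtype=debug --default-library=shared
# compile with the ninja backend meson autodetected
ninja -C build-debug
# stage the install under DESTDIR instead of writing to /usr/local
DESTDIR=/path/to/stage meson install --quiet -C build-debug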
00:02:11.438 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:11.699 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:11.699 ninja: no work to do. 00:02:19.831 CC lib/ut_mock/mock.o 00:02:19.831 CC lib/log/log.o 00:02:19.831 CC lib/log/log_flags.o 00:02:19.831 CC lib/log/log_deprecated.o 00:02:19.831 CC lib/ut/ut.o 00:02:19.831 LIB libspdk_ut_mock.a 00:02:19.831 LIB libspdk_log.a 00:02:19.831 SO libspdk_ut_mock.so.5.0 00:02:19.831 LIB libspdk_ut.a 00:02:19.831 SO libspdk_log.so.6.1 00:02:19.831 SO libspdk_ut.so.1.0 00:02:19.831 SYMLINK libspdk_ut_mock.so 00:02:19.831 SYMLINK libspdk_log.so 00:02:19.831 SYMLINK libspdk_ut.so 00:02:19.831 CC lib/util/base64.o 00:02:19.831 CC lib/util/cpuset.o 00:02:19.831 CC lib/dma/dma.o 00:02:19.831 CC lib/util/bit_array.o 00:02:19.831 CC lib/util/crc16.o 00:02:19.831 CC lib/util/crc32.o 00:02:19.831 CC lib/util/crc32c.o 00:02:19.831 CC lib/util/crc32_ieee.o 00:02:19.831 CC lib/util/crc64.o 00:02:19.831 CXX lib/trace_parser/trace.o 00:02:19.831 CC lib/util/dif.o 00:02:19.831 CC lib/util/fd.o 00:02:19.831 CC lib/ioat/ioat.o 00:02:19.831 CC lib/util/file.o 00:02:19.831 CC lib/util/hexlify.o 00:02:19.831 CC lib/util/iov.o 00:02:19.831 CC lib/util/math.o 00:02:19.831 CC lib/util/pipe.o 00:02:19.831 CC lib/util/strerror_tls.o 00:02:19.831 CC lib/util/string.o 00:02:19.831 CC lib/util/uuid.o 00:02:19.831 CC lib/util/fd_group.o 00:02:19.831 CC lib/util/zipf.o 00:02:19.831 CC lib/util/xor.o 00:02:20.091 CC lib/vfio_user/host/vfio_user_pci.o 00:02:20.091 CC lib/vfio_user/host/vfio_user.o 00:02:20.091 LIB libspdk_dma.a 00:02:20.091 SO libspdk_dma.so.3.0 00:02:20.350 LIB libspdk_ioat.a 00:02:20.350 SYMLINK libspdk_dma.so 00:02:20.350 SO libspdk_ioat.so.6.0 00:02:20.350 LIB libspdk_vfio_user.a 00:02:20.350 SYMLINK libspdk_ioat.so 00:02:20.350 SO libspdk_vfio_user.so.4.0 00:02:20.350 LIB libspdk_util.a 00:02:20.350 SYMLINK libspdk_vfio_user.so 00:02:20.610 SO libspdk_util.so.8.0 00:02:20.610 SYMLINK libspdk_util.so 00:02:20.610 LIB libspdk_trace_parser.a 00:02:20.870 SO libspdk_trace_parser.so.4.0 00:02:20.870 CC lib/conf/conf.o 00:02:20.870 CC lib/rdma/common.o 00:02:20.870 CC lib/rdma/rdma_verbs.o 00:02:20.870 CC lib/json/json_parse.o 00:02:20.870 CC lib/json/json_util.o 00:02:20.870 CC lib/json/json_write.o 00:02:20.870 SYMLINK libspdk_trace_parser.so 00:02:20.870 CC lib/vmd/vmd.o 00:02:20.870 CC lib/idxd/idxd.o 00:02:20.870 CC lib/vmd/led.o 00:02:20.870 CC lib/idxd/idxd_user.o 00:02:20.870 CC lib/idxd/idxd_kernel.o 00:02:20.870 CC lib/env_dpdk/env.o 00:02:20.870 CC lib/env_dpdk/memory.o 00:02:20.870 CC lib/env_dpdk/pci.o 00:02:20.870 CC lib/env_dpdk/init.o 00:02:20.870 CC lib/env_dpdk/threads.o 00:02:20.870 CC lib/env_dpdk/pci_ioat.o 00:02:20.870 CC lib/env_dpdk/pci_vmd.o 00:02:20.870 CC lib/env_dpdk/pci_virtio.o 00:02:20.870 CC lib/env_dpdk/pci_idxd.o 00:02:20.870 CC lib/env_dpdk/pci_event.o 00:02:20.870 CC lib/env_dpdk/sigbus_handler.o 00:02:20.870 CC lib/env_dpdk/pci_dpdk.o 00:02:20.870 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:20.870 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:21.131 LIB libspdk_conf.a 00:02:21.131 SO libspdk_conf.so.5.0 00:02:21.131 LIB libspdk_rdma.a 00:02:21.131 SYMLINK libspdk_conf.so 00:02:21.131 LIB libspdk_json.a 00:02:21.131 SO libspdk_rdma.so.5.0 00:02:21.131 SO libspdk_json.so.5.1 00:02:21.392 SYMLINK libspdk_rdma.so 00:02:21.392 SYMLINK 
libspdk_json.so 00:02:21.392 LIB libspdk_idxd.a 00:02:21.392 SO libspdk_idxd.so.11.0 00:02:21.392 LIB libspdk_vmd.a 00:02:21.392 SO libspdk_vmd.so.5.0 00:02:21.392 SYMLINK libspdk_idxd.so 00:02:21.392 CC lib/jsonrpc/jsonrpc_server.o 00:02:21.392 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:21.392 CC lib/jsonrpc/jsonrpc_client.o 00:02:21.392 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:21.653 SYMLINK libspdk_vmd.so 00:02:21.653 LIB libspdk_jsonrpc.a 00:02:21.914 SO libspdk_jsonrpc.so.5.1 00:02:21.914 SYMLINK libspdk_jsonrpc.so 00:02:22.175 LIB libspdk_env_dpdk.a 00:02:22.175 CC lib/rpc/rpc.o 00:02:22.175 SO libspdk_env_dpdk.so.13.0 00:02:22.175 SYMLINK libspdk_env_dpdk.so 00:02:22.437 LIB libspdk_rpc.a 00:02:22.437 SO libspdk_rpc.so.5.0 00:02:22.437 SYMLINK libspdk_rpc.so 00:02:22.699 CC lib/notify/notify.o 00:02:22.699 CC lib/notify/notify_rpc.o 00:02:22.699 CC lib/trace/trace.o 00:02:22.699 CC lib/trace/trace_flags.o 00:02:22.699 CC lib/trace/trace_rpc.o 00:02:22.699 CC lib/sock/sock.o 00:02:22.699 CC lib/sock/sock_rpc.o 00:02:22.699 LIB libspdk_notify.a 00:02:22.960 SO libspdk_notify.so.5.0 00:02:22.960 LIB libspdk_trace.a 00:02:22.960 SO libspdk_trace.so.9.0 00:02:22.960 SYMLINK libspdk_notify.so 00:02:22.960 SYMLINK libspdk_trace.so 00:02:22.960 LIB libspdk_sock.a 00:02:22.960 SO libspdk_sock.so.8.0 00:02:23.223 SYMLINK libspdk_sock.so 00:02:23.223 CC lib/thread/thread.o 00:02:23.223 CC lib/thread/iobuf.o 00:02:23.484 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:23.484 CC lib/nvme/nvme_ctrlr.o 00:02:23.484 CC lib/nvme/nvme_fabric.o 00:02:23.484 CC lib/nvme/nvme_ns_cmd.o 00:02:23.484 CC lib/nvme/nvme_ns.o 00:02:23.484 CC lib/nvme/nvme_pcie_common.o 00:02:23.484 CC lib/nvme/nvme_pcie.o 00:02:23.484 CC lib/nvme/nvme_qpair.o 00:02:23.484 CC lib/nvme/nvme.o 00:02:23.484 CC lib/nvme/nvme_quirks.o 00:02:23.484 CC lib/nvme/nvme_transport.o 00:02:23.484 CC lib/nvme/nvme_discovery.o 00:02:23.484 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:23.484 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:23.484 CC lib/nvme/nvme_io_msg.o 00:02:23.484 CC lib/nvme/nvme_tcp.o 00:02:23.484 CC lib/nvme/nvme_opal.o 00:02:23.484 CC lib/nvme/nvme_poll_group.o 00:02:23.484 CC lib/nvme/nvme_zns.o 00:02:23.484 CC lib/nvme/nvme_cuse.o 00:02:23.484 CC lib/nvme/nvme_rdma.o 00:02:23.484 CC lib/nvme/nvme_vfio_user.o 00:02:24.427 LIB libspdk_thread.a 00:02:24.427 SO libspdk_thread.so.9.0 00:02:24.689 SYMLINK libspdk_thread.so 00:02:24.950 CC lib/blob/blobstore.o 00:02:24.950 CC lib/accel/accel.o 00:02:24.950 CC lib/blob/request.o 00:02:24.950 CC lib/blob/zeroes.o 00:02:24.950 CC lib/accel/accel_rpc.o 00:02:24.950 CC lib/blob/blob_bs_dev.o 00:02:24.950 CC lib/accel/accel_sw.o 00:02:24.950 CC lib/virtio/virtio.o 00:02:24.950 CC lib/virtio/virtio_vhost_user.o 00:02:24.950 CC lib/virtio/virtio_vfio_user.o 00:02:24.950 CC lib/virtio/virtio_pci.o 00:02:24.950 CC lib/vfu_tgt/tgt_endpoint.o 00:02:24.950 CC lib/init/json_config.o 00:02:24.950 CC lib/vfu_tgt/tgt_rpc.o 00:02:24.950 CC lib/init/subsystem.o 00:02:24.950 CC lib/init/subsystem_rpc.o 00:02:24.950 CC lib/init/rpc.o 00:02:24.950 LIB libspdk_init.a 00:02:25.211 LIB libspdk_nvme.a 00:02:25.211 SO libspdk_init.so.4.0 00:02:25.211 LIB libspdk_virtio.a 00:02:25.211 LIB libspdk_vfu_tgt.a 00:02:25.211 SO libspdk_virtio.so.6.0 00:02:25.211 SO libspdk_vfu_tgt.so.2.0 00:02:25.211 SYMLINK libspdk_init.so 00:02:25.211 SO libspdk_nvme.so.12.0 00:02:25.211 SYMLINK libspdk_virtio.so 00:02:25.211 SYMLINK libspdk_vfu_tgt.so 00:02:25.473 CC lib/event/app.o 00:02:25.473 CC lib/event/reactor.o 00:02:25.473 CC 
lib/event/log_rpc.o 00:02:25.473 CC lib/event/app_rpc.o 00:02:25.473 CC lib/event/scheduler_static.o 00:02:25.473 SYMLINK libspdk_nvme.so 00:02:25.734 LIB libspdk_accel.a 00:02:25.734 SO libspdk_accel.so.14.0 00:02:25.734 LIB libspdk_event.a 00:02:25.734 SYMLINK libspdk_accel.so 00:02:25.734 SO libspdk_event.so.12.0 00:02:25.995 SYMLINK libspdk_event.so 00:02:25.995 CC lib/bdev/bdev.o 00:02:25.995 CC lib/bdev/bdev_rpc.o 00:02:25.995 CC lib/bdev/bdev_zone.o 00:02:25.995 CC lib/bdev/part.o 00:02:25.995 CC lib/bdev/scsi_nvme.o 00:02:26.569 LIB libspdk_blob.a 00:02:26.569 SO libspdk_blob.so.10.1 00:02:26.569 SYMLINK libspdk_blob.so 00:02:26.831 CC lib/blobfs/blobfs.o 00:02:26.831 CC lib/blobfs/tree.o 00:02:26.831 CC lib/lvol/lvol.o 00:02:27.403 LIB libspdk_blobfs.a 00:02:27.665 SO libspdk_blobfs.so.9.0 00:02:27.665 LIB libspdk_lvol.a 00:02:27.665 SO libspdk_lvol.so.9.1 00:02:27.665 SYMLINK libspdk_blobfs.so 00:02:27.665 SYMLINK libspdk_lvol.so 00:02:28.238 LIB libspdk_bdev.a 00:02:28.238 SO libspdk_bdev.so.14.0 00:02:28.238 SYMLINK libspdk_bdev.so 00:02:28.499 CC lib/ftl/ftl_core.o 00:02:28.499 CC lib/nvmf/ctrlr.o 00:02:28.499 CC lib/scsi/dev.o 00:02:28.499 CC lib/scsi/lun.o 00:02:28.499 CC lib/scsi/scsi_bdev.o 00:02:28.499 CC lib/scsi/port.o 00:02:28.499 CC lib/ftl/ftl_layout.o 00:02:28.499 CC lib/nvmf/ctrlr_bdev.o 00:02:28.499 CC lib/ftl/ftl_init.o 00:02:28.499 CC lib/nvmf/ctrlr_discovery.o 00:02:28.499 CC lib/nvmf/subsystem.o 00:02:28.499 CC lib/scsi/scsi.o 00:02:28.499 CC lib/ftl/ftl_debug.o 00:02:28.499 CC lib/ftl/ftl_sb.o 00:02:28.499 CC lib/nvmf/nvmf.o 00:02:28.499 CC lib/ftl/ftl_io.o 00:02:28.499 CC lib/scsi/scsi_pr.o 00:02:28.499 CC lib/nbd/nbd.o 00:02:28.499 CC lib/nvmf/nvmf_rpc.o 00:02:28.499 CC lib/ftl/ftl_l2p.o 00:02:28.499 CC lib/scsi/scsi_rpc.o 00:02:28.499 CC lib/ublk/ublk.o 00:02:28.499 CC lib/nvmf/transport.o 00:02:28.499 CC lib/nbd/nbd_rpc.o 00:02:28.499 CC lib/scsi/task.o 00:02:28.499 CC lib/ftl/ftl_l2p_flat.o 00:02:28.499 CC lib/nvmf/tcp.o 00:02:28.499 CC lib/ublk/ublk_rpc.o 00:02:28.499 CC lib/ftl/ftl_nv_cache.o 00:02:28.499 CC lib/nvmf/vfio_user.o 00:02:28.499 CC lib/ftl/ftl_band.o 00:02:28.499 CC lib/nvmf/rdma.o 00:02:28.499 CC lib/ftl/ftl_band_ops.o 00:02:28.499 CC lib/ftl/ftl_writer.o 00:02:28.499 CC lib/ftl/ftl_rq.o 00:02:28.499 CC lib/ftl/ftl_reloc.o 00:02:28.499 CC lib/ftl/ftl_l2p_cache.o 00:02:28.499 CC lib/ftl/ftl_p2l.o 00:02:28.499 CC lib/ftl/mngt/ftl_mngt.o 00:02:28.499 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:28.499 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:28.499 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:28.499 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:28.499 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:28.499 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:28.499 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:28.499 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:28.499 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:28.499 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:28.499 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:28.499 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:28.499 CC lib/ftl/utils/ftl_conf.o 00:02:28.499 CC lib/ftl/utils/ftl_md.o 00:02:28.499 CC lib/ftl/utils/ftl_mempool.o 00:02:28.499 CC lib/ftl/utils/ftl_bitmap.o 00:02:28.499 CC lib/ftl/utils/ftl_property.o 00:02:28.759 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:28.759 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:28.759 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:28.759 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:28.759 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:28.759 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:28.759 CC lib/ftl/upgrade/ftl_sb_v3.o 
00:02:28.759 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:28.759 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:28.759 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:28.759 CC lib/ftl/base/ftl_base_dev.o 00:02:28.759 CC lib/ftl/base/ftl_base_bdev.o 00:02:28.759 CC lib/ftl/ftl_trace.o 00:02:29.043 LIB libspdk_nbd.a 00:02:29.043 SO libspdk_nbd.so.6.0 00:02:29.043 LIB libspdk_scsi.a 00:02:29.368 SYMLINK libspdk_nbd.so 00:02:29.368 SO libspdk_scsi.so.8.0 00:02:29.368 LIB libspdk_ublk.a 00:02:29.368 SYMLINK libspdk_scsi.so 00:02:29.368 SO libspdk_ublk.so.2.0 00:02:29.368 SYMLINK libspdk_ublk.so 00:02:29.368 LIB libspdk_ftl.a 00:02:29.654 CC lib/vhost/vhost.o 00:02:29.654 CC lib/vhost/vhost_rpc.o 00:02:29.654 CC lib/vhost/vhost_scsi.o 00:02:29.654 CC lib/vhost/vhost_blk.o 00:02:29.654 CC lib/vhost/rte_vhost_user.o 00:02:29.654 CC lib/iscsi/conn.o 00:02:29.654 CC lib/iscsi/md5.o 00:02:29.654 CC lib/iscsi/init_grp.o 00:02:29.654 CC lib/iscsi/iscsi.o 00:02:29.654 CC lib/iscsi/param.o 00:02:29.654 CC lib/iscsi/portal_grp.o 00:02:29.654 CC lib/iscsi/tgt_node.o 00:02:29.654 CC lib/iscsi/iscsi_subsystem.o 00:02:29.654 CC lib/iscsi/iscsi_rpc.o 00:02:29.654 CC lib/iscsi/task.o 00:02:29.654 SO libspdk_ftl.so.8.0 00:02:29.916 SYMLINK libspdk_ftl.so 00:02:30.490 LIB libspdk_nvmf.a 00:02:30.490 LIB libspdk_vhost.a 00:02:30.490 SO libspdk_nvmf.so.17.0 00:02:30.490 SO libspdk_vhost.so.7.1 00:02:30.490 SYMLINK libspdk_vhost.so 00:02:30.752 SYMLINK libspdk_nvmf.so 00:02:30.752 LIB libspdk_iscsi.a 00:02:30.752 SO libspdk_iscsi.so.7.0 00:02:30.752 SYMLINK libspdk_iscsi.so 00:02:31.326 CC module/env_dpdk/env_dpdk_rpc.o 00:02:31.326 CC module/vfu_device/vfu_virtio.o 00:02:31.327 CC module/vfu_device/vfu_virtio_blk.o 00:02:31.327 CC module/vfu_device/vfu_virtio_scsi.o 00:02:31.327 CC module/vfu_device/vfu_virtio_rpc.o 00:02:31.327 CC module/accel/ioat/accel_ioat.o 00:02:31.327 CC module/accel/ioat/accel_ioat_rpc.o 00:02:31.327 CC module/blob/bdev/blob_bdev.o 00:02:31.327 CC module/accel/dsa/accel_dsa.o 00:02:31.327 CC module/accel/iaa/accel_iaa.o 00:02:31.327 CC module/accel/dsa/accel_dsa_rpc.o 00:02:31.327 CC module/accel/iaa/accel_iaa_rpc.o 00:02:31.327 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:31.327 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:31.327 CC module/sock/posix/posix.o 00:02:31.327 CC module/accel/error/accel_error.o 00:02:31.327 CC module/accel/error/accel_error_rpc.o 00:02:31.327 CC module/scheduler/gscheduler/gscheduler.o 00:02:31.327 LIB libspdk_env_dpdk_rpc.a 00:02:31.327 SO libspdk_env_dpdk_rpc.so.5.0 00:02:31.588 SYMLINK libspdk_env_dpdk_rpc.so 00:02:31.588 LIB libspdk_scheduler_dpdk_governor.a 00:02:31.588 LIB libspdk_scheduler_gscheduler.a 00:02:31.588 LIB libspdk_accel_ioat.a 00:02:31.588 LIB libspdk_accel_error.a 00:02:31.588 LIB libspdk_accel_iaa.a 00:02:31.588 SO libspdk_scheduler_gscheduler.so.3.0 00:02:31.589 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:31.589 LIB libspdk_scheduler_dynamic.a 00:02:31.589 LIB libspdk_accel_dsa.a 00:02:31.589 LIB libspdk_blob_bdev.a 00:02:31.589 SO libspdk_accel_ioat.so.5.0 00:02:31.589 SO libspdk_accel_error.so.1.0 00:02:31.589 SO libspdk_accel_iaa.so.2.0 00:02:31.589 SO libspdk_scheduler_dynamic.so.3.0 00:02:31.589 SO libspdk_blob_bdev.so.10.1 00:02:31.589 SO libspdk_accel_dsa.so.4.0 00:02:31.589 SYMLINK libspdk_scheduler_gscheduler.so 00:02:31.589 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:31.589 SYMLINK libspdk_accel_ioat.so 00:02:31.589 SYMLINK libspdk_accel_error.so 00:02:31.589 SYMLINK libspdk_scheduler_dynamic.so 00:02:31.589 SYMLINK 
libspdk_accel_iaa.so 00:02:31.589 SYMLINK libspdk_blob_bdev.so 00:02:31.589 SYMLINK libspdk_accel_dsa.so 00:02:31.850 LIB libspdk_vfu_device.a 00:02:31.850 SO libspdk_vfu_device.so.2.0 00:02:31.850 SYMLINK libspdk_vfu_device.so 00:02:32.112 LIB libspdk_sock_posix.a 00:02:32.112 SO libspdk_sock_posix.so.5.0 00:02:32.112 CC module/blobfs/bdev/blobfs_bdev.o 00:02:32.112 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:32.112 CC module/bdev/error/vbdev_error.o 00:02:32.112 CC module/bdev/aio/bdev_aio_rpc.o 00:02:32.112 CC module/bdev/aio/bdev_aio.o 00:02:32.112 CC module/bdev/error/vbdev_error_rpc.o 00:02:32.112 CC module/bdev/iscsi/bdev_iscsi.o 00:02:32.112 CC module/bdev/lvol/vbdev_lvol.o 00:02:32.112 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:32.112 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:32.112 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:32.112 CC module/bdev/delay/vbdev_delay.o 00:02:32.112 CC module/bdev/gpt/gpt.o 00:02:32.112 CC module/bdev/split/vbdev_split.o 00:02:32.112 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:32.112 CC module/bdev/malloc/bdev_malloc.o 00:02:32.112 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:32.112 CC module/bdev/gpt/vbdev_gpt.o 00:02:32.112 CC module/bdev/split/vbdev_split_rpc.o 00:02:32.112 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:32.112 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:32.112 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:32.112 CC module/bdev/passthru/vbdev_passthru.o 00:02:32.112 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:32.112 CC module/bdev/null/bdev_null.o 00:02:32.112 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:32.112 CC module/bdev/nvme/bdev_nvme.o 00:02:32.112 CC module/bdev/null/bdev_null_rpc.o 00:02:32.112 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:32.112 CC module/bdev/nvme/nvme_rpc.o 00:02:32.112 CC module/bdev/nvme/bdev_mdns_client.o 00:02:32.112 CC module/bdev/raid/bdev_raid.o 00:02:32.112 CC module/bdev/raid/bdev_raid_sb.o 00:02:32.112 CC module/bdev/nvme/vbdev_opal.o 00:02:32.112 CC module/bdev/raid/bdev_raid_rpc.o 00:02:32.112 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:32.112 CC module/bdev/raid/raid0.o 00:02:32.112 CC module/bdev/raid/raid1.o 00:02:32.112 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:32.112 CC module/bdev/ftl/bdev_ftl.o 00:02:32.112 CC module/bdev/raid/concat.o 00:02:32.112 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:32.112 SYMLINK libspdk_sock_posix.so 00:02:32.374 LIB libspdk_blobfs_bdev.a 00:02:32.374 SO libspdk_blobfs_bdev.so.5.0 00:02:32.374 LIB libspdk_bdev_split.a 00:02:32.374 SO libspdk_bdev_split.so.5.0 00:02:32.374 LIB libspdk_bdev_error.a 00:02:32.374 LIB libspdk_bdev_null.a 00:02:32.374 SYMLINK libspdk_blobfs_bdev.so 00:02:32.374 LIB libspdk_bdev_aio.a 00:02:32.374 SO libspdk_bdev_error.so.5.0 00:02:32.374 LIB libspdk_bdev_gpt.a 00:02:32.374 LIB libspdk_bdev_passthru.a 00:02:32.374 SO libspdk_bdev_null.so.5.0 00:02:32.374 LIB libspdk_bdev_ftl.a 00:02:32.374 SO libspdk_bdev_aio.so.5.0 00:02:32.374 SYMLINK libspdk_bdev_split.so 00:02:32.374 SO libspdk_bdev_passthru.so.5.0 00:02:32.374 LIB libspdk_bdev_iscsi.a 00:02:32.635 SO libspdk_bdev_gpt.so.5.0 00:02:32.635 SO libspdk_bdev_ftl.so.5.0 00:02:32.635 LIB libspdk_bdev_zone_block.a 00:02:32.635 SYMLINK libspdk_bdev_error.so 00:02:32.635 LIB libspdk_bdev_malloc.a 00:02:32.635 SYMLINK libspdk_bdev_null.so 00:02:32.635 LIB libspdk_bdev_delay.a 00:02:32.635 SO libspdk_bdev_iscsi.so.5.0 00:02:32.635 SYMLINK libspdk_bdev_aio.so 00:02:32.635 SO libspdk_bdev_zone_block.so.5.0 00:02:32.635 SO libspdk_bdev_malloc.so.5.0 00:02:32.635 
SYMLINK libspdk_bdev_passthru.so 00:02:32.635 SYMLINK libspdk_bdev_gpt.so 00:02:32.635 SYMLINK libspdk_bdev_ftl.so 00:02:32.635 SO libspdk_bdev_delay.so.5.0 00:02:32.635 SYMLINK libspdk_bdev_iscsi.so 00:02:32.635 LIB libspdk_bdev_lvol.a 00:02:32.635 SYMLINK libspdk_bdev_zone_block.so 00:02:32.635 SYMLINK libspdk_bdev_malloc.so 00:02:32.635 LIB libspdk_bdev_virtio.a 00:02:32.635 SYMLINK libspdk_bdev_delay.so 00:02:32.635 SO libspdk_bdev_lvol.so.5.0 00:02:32.635 SO libspdk_bdev_virtio.so.5.0 00:02:32.635 SYMLINK libspdk_bdev_lvol.so 00:02:32.897 SYMLINK libspdk_bdev_virtio.so 00:02:32.897 LIB libspdk_bdev_raid.a 00:02:32.897 SO libspdk_bdev_raid.so.5.0 00:02:33.158 SYMLINK libspdk_bdev_raid.so 00:02:34.102 LIB libspdk_bdev_nvme.a 00:02:34.102 SO libspdk_bdev_nvme.so.6.0 00:02:34.102 SYMLINK libspdk_bdev_nvme.so 00:02:34.674 CC module/event/subsystems/vmd/vmd.o 00:02:34.674 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:34.674 CC module/event/subsystems/iobuf/iobuf.o 00:02:34.674 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:34.674 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:34.674 CC module/event/subsystems/scheduler/scheduler.o 00:02:34.674 CC module/event/subsystems/sock/sock.o 00:02:34.674 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:34.674 LIB libspdk_event_scheduler.a 00:02:34.674 LIB libspdk_event_vfu_tgt.a 00:02:34.674 LIB libspdk_event_vmd.a 00:02:34.674 LIB libspdk_event_sock.a 00:02:34.674 LIB libspdk_event_vhost_blk.a 00:02:34.674 LIB libspdk_event_iobuf.a 00:02:34.674 SO libspdk_event_scheduler.so.3.0 00:02:34.674 SO libspdk_event_vfu_tgt.so.2.0 00:02:34.674 SO libspdk_event_vhost_blk.so.2.0 00:02:34.674 SO libspdk_event_vmd.so.5.0 00:02:34.674 SO libspdk_event_sock.so.4.0 00:02:34.674 SO libspdk_event_iobuf.so.2.0 00:02:34.936 SYMLINK libspdk_event_vfu_tgt.so 00:02:34.936 SYMLINK libspdk_event_scheduler.so 00:02:34.936 SYMLINK libspdk_event_vhost_blk.so 00:02:34.936 SYMLINK libspdk_event_vmd.so 00:02:34.936 SYMLINK libspdk_event_sock.so 00:02:34.936 SYMLINK libspdk_event_iobuf.so 00:02:35.197 CC module/event/subsystems/accel/accel.o 00:02:35.197 LIB libspdk_event_accel.a 00:02:35.197 SO libspdk_event_accel.so.5.0 00:02:35.523 SYMLINK libspdk_event_accel.so 00:02:35.523 CC module/event/subsystems/bdev/bdev.o 00:02:35.785 LIB libspdk_event_bdev.a 00:02:35.785 SO libspdk_event_bdev.so.5.0 00:02:35.785 SYMLINK libspdk_event_bdev.so 00:02:36.047 CC module/event/subsystems/scsi/scsi.o 00:02:36.047 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:36.047 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:36.047 CC module/event/subsystems/ublk/ublk.o 00:02:36.047 CC module/event/subsystems/nbd/nbd.o 00:02:36.309 LIB libspdk_event_nbd.a 00:02:36.309 LIB libspdk_event_scsi.a 00:02:36.309 LIB libspdk_event_ublk.a 00:02:36.309 SO libspdk_event_nbd.so.5.0 00:02:36.309 SO libspdk_event_ublk.so.2.0 00:02:36.309 SO libspdk_event_scsi.so.5.0 00:02:36.309 LIB libspdk_event_nvmf.a 00:02:36.309 SYMLINK libspdk_event_nbd.so 00:02:36.309 SO libspdk_event_nvmf.so.5.0 00:02:36.309 SYMLINK libspdk_event_scsi.so 00:02:36.309 SYMLINK libspdk_event_ublk.so 00:02:36.571 SYMLINK libspdk_event_nvmf.so 00:02:36.571 CC module/event/subsystems/iscsi/iscsi.o 00:02:36.571 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:36.832 LIB libspdk_event_vhost_scsi.a 00:02:36.832 LIB libspdk_event_iscsi.a 00:02:36.832 SO libspdk_event_iscsi.so.5.0 00:02:36.832 SO libspdk_event_vhost_scsi.so.2.0 00:02:36.832 SYMLINK libspdk_event_iscsi.so 00:02:36.832 SYMLINK libspdk_event_vhost_scsi.so 
00:02:37.094 SO libspdk.so.5.0 00:02:37.094 SYMLINK libspdk.so 00:02:37.354 CC app/trace_record/trace_record.o 00:02:37.354 CC app/spdk_nvme_identify/identify.o 00:02:37.354 CC app/spdk_top/spdk_top.o 00:02:37.354 TEST_HEADER include/spdk/accel.h 00:02:37.354 TEST_HEADER include/spdk/accel_module.h 00:02:37.354 TEST_HEADER include/spdk/barrier.h 00:02:37.354 TEST_HEADER include/spdk/assert.h 00:02:37.354 CXX app/trace/trace.o 00:02:37.354 TEST_HEADER include/spdk/base64.h 00:02:37.354 CC app/spdk_nvme_perf/perf.o 00:02:37.354 TEST_HEADER include/spdk/bdev.h 00:02:37.354 TEST_HEADER include/spdk/bdev_zone.h 00:02:37.354 CC app/spdk_nvme_discover/discovery_aer.o 00:02:37.354 TEST_HEADER include/spdk/bdev_module.h 00:02:37.354 TEST_HEADER include/spdk/bit_pool.h 00:02:37.354 TEST_HEADER include/spdk/blob_bdev.h 00:02:37.354 TEST_HEADER include/spdk/bit_array.h 00:02:37.354 TEST_HEADER include/spdk/blobfs.h 00:02:37.354 TEST_HEADER include/spdk/blob.h 00:02:37.354 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:37.354 CC test/rpc_client/rpc_client_test.o 00:02:37.354 TEST_HEADER include/spdk/conf.h 00:02:37.354 TEST_HEADER include/spdk/config.h 00:02:37.354 TEST_HEADER include/spdk/cpuset.h 00:02:37.354 CC app/spdk_lspci/spdk_lspci.o 00:02:37.354 TEST_HEADER include/spdk/crc64.h 00:02:37.354 TEST_HEADER include/spdk/crc16.h 00:02:37.354 TEST_HEADER include/spdk/crc32.h 00:02:37.354 TEST_HEADER include/spdk/dma.h 00:02:37.354 TEST_HEADER include/spdk/dif.h 00:02:37.354 TEST_HEADER include/spdk/env_dpdk.h 00:02:37.354 TEST_HEADER include/spdk/env.h 00:02:37.354 TEST_HEADER include/spdk/endian.h 00:02:37.354 TEST_HEADER include/spdk/fd_group.h 00:02:37.354 TEST_HEADER include/spdk/file.h 00:02:37.354 TEST_HEADER include/spdk/event.h 00:02:37.354 TEST_HEADER include/spdk/ftl.h 00:02:37.354 TEST_HEADER include/spdk/fd.h 00:02:37.354 CC app/nvmf_tgt/nvmf_main.o 00:02:37.354 TEST_HEADER include/spdk/gpt_spec.h 00:02:37.354 TEST_HEADER include/spdk/hexlify.h 00:02:37.354 TEST_HEADER include/spdk/histogram_data.h 00:02:37.354 TEST_HEADER include/spdk/idxd.h 00:02:37.354 TEST_HEADER include/spdk/idxd_spec.h 00:02:37.354 TEST_HEADER include/spdk/init.h 00:02:37.354 TEST_HEADER include/spdk/ioat.h 00:02:37.354 TEST_HEADER include/spdk/ioat_spec.h 00:02:37.354 TEST_HEADER include/spdk/iscsi_spec.h 00:02:37.354 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:37.354 TEST_HEADER include/spdk/json.h 00:02:37.354 TEST_HEADER include/spdk/jsonrpc.h 00:02:37.354 CC app/spdk_tgt/spdk_tgt.o 00:02:37.354 TEST_HEADER include/spdk/log.h 00:02:37.354 TEST_HEADER include/spdk/likely.h 00:02:37.354 TEST_HEADER include/spdk/lvol.h 00:02:37.354 TEST_HEADER include/spdk/memory.h 00:02:37.354 TEST_HEADER include/spdk/mmio.h 00:02:37.354 TEST_HEADER include/spdk/nbd.h 00:02:37.354 CC app/iscsi_tgt/iscsi_tgt.o 00:02:37.354 TEST_HEADER include/spdk/notify.h 00:02:37.354 TEST_HEADER include/spdk/nvme.h 00:02:37.354 TEST_HEADER include/spdk/nvme_intel.h 00:02:37.354 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:37.354 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:37.354 TEST_HEADER include/spdk/nvme_spec.h 00:02:37.354 TEST_HEADER include/spdk/nvme_zns.h 00:02:37.354 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:37.354 CC app/spdk_dd/spdk_dd.o 00:02:37.354 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:37.354 TEST_HEADER include/spdk/nvmf.h 00:02:37.354 CC app/vhost/vhost.o 00:02:37.354 TEST_HEADER include/spdk/nvmf_transport.h 00:02:37.354 TEST_HEADER include/spdk/nvmf_spec.h 00:02:37.354 TEST_HEADER include/spdk/opal_spec.h 
00:02:37.354 TEST_HEADER include/spdk/opal.h 00:02:37.354 TEST_HEADER include/spdk/pci_ids.h 00:02:37.354 TEST_HEADER include/spdk/pipe.h 00:02:37.354 TEST_HEADER include/spdk/queue.h 00:02:37.354 TEST_HEADER include/spdk/reduce.h 00:02:37.354 TEST_HEADER include/spdk/rpc.h 00:02:37.354 TEST_HEADER include/spdk/scheduler.h 00:02:37.354 TEST_HEADER include/spdk/scsi.h 00:02:37.354 TEST_HEADER include/spdk/scsi_spec.h 00:02:37.354 TEST_HEADER include/spdk/sock.h 00:02:37.354 TEST_HEADER include/spdk/stdinc.h 00:02:37.354 TEST_HEADER include/spdk/string.h 00:02:37.354 TEST_HEADER include/spdk/thread.h 00:02:37.354 TEST_HEADER include/spdk/trace.h 00:02:37.354 TEST_HEADER include/spdk/tree.h 00:02:37.354 TEST_HEADER include/spdk/trace_parser.h 00:02:37.354 TEST_HEADER include/spdk/ublk.h 00:02:37.354 TEST_HEADER include/spdk/util.h 00:02:37.354 TEST_HEADER include/spdk/uuid.h 00:02:37.354 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:37.354 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:37.354 TEST_HEADER include/spdk/version.h 00:02:37.354 TEST_HEADER include/spdk/vhost.h 00:02:37.354 TEST_HEADER include/spdk/vmd.h 00:02:37.354 TEST_HEADER include/spdk/xor.h 00:02:37.354 TEST_HEADER include/spdk/zipf.h 00:02:37.354 CXX test/cpp_headers/accel_module.o 00:02:37.354 CXX test/cpp_headers/accel.o 00:02:37.354 CXX test/cpp_headers/assert.o 00:02:37.618 CXX test/cpp_headers/barrier.o 00:02:37.618 CXX test/cpp_headers/base64.o 00:02:37.618 CXX test/cpp_headers/bdev_module.o 00:02:37.618 CXX test/cpp_headers/bdev.o 00:02:37.618 CXX test/cpp_headers/bdev_zone.o 00:02:37.618 CXX test/cpp_headers/bit_array.o 00:02:37.618 CXX test/cpp_headers/blobfs_bdev.o 00:02:37.618 CXX test/cpp_headers/blob_bdev.o 00:02:37.618 CXX test/cpp_headers/bit_pool.o 00:02:37.618 CXX test/cpp_headers/blob.o 00:02:37.618 CXX test/cpp_headers/blobfs.o 00:02:37.618 CXX test/cpp_headers/conf.o 00:02:37.618 CXX test/cpp_headers/config.o 00:02:37.618 CXX test/cpp_headers/crc16.o 00:02:37.618 CXX test/cpp_headers/cpuset.o 00:02:37.618 CXX test/cpp_headers/crc32.o 00:02:37.618 CXX test/cpp_headers/crc64.o 00:02:37.618 CC examples/ioat/verify/verify.o 00:02:37.618 CXX test/cpp_headers/dif.o 00:02:37.618 CXX test/cpp_headers/dma.o 00:02:37.618 CXX test/cpp_headers/env_dpdk.o 00:02:37.618 CXX test/cpp_headers/env.o 00:02:37.618 CXX test/cpp_headers/endian.o 00:02:37.618 CXX test/cpp_headers/fd_group.o 00:02:37.618 CXX test/cpp_headers/event.o 00:02:37.618 CXX test/cpp_headers/fd.o 00:02:37.618 CXX test/cpp_headers/ftl.o 00:02:37.618 CXX test/cpp_headers/file.o 00:02:37.618 CXX test/cpp_headers/hexlify.o 00:02:37.618 CXX test/cpp_headers/gpt_spec.o 00:02:37.618 CXX test/cpp_headers/histogram_data.o 00:02:37.618 CC test/event/reactor_perf/reactor_perf.o 00:02:37.618 CXX test/cpp_headers/idxd.o 00:02:37.618 CXX test/cpp_headers/init.o 00:02:37.618 CXX test/cpp_headers/idxd_spec.o 00:02:37.618 CC test/app/jsoncat/jsoncat.o 00:02:37.618 CXX test/cpp_headers/ioat_spec.o 00:02:37.618 CXX test/cpp_headers/ioat.o 00:02:37.618 CC examples/accel/perf/accel_perf.o 00:02:37.618 CC examples/util/zipf/zipf.o 00:02:37.618 CXX test/cpp_headers/json.o 00:02:37.618 CXX test/cpp_headers/jsonrpc.o 00:02:37.618 CXX test/cpp_headers/likely.o 00:02:37.618 CC examples/vmd/led/led.o 00:02:37.618 CXX test/cpp_headers/iscsi_spec.o 00:02:37.618 CC test/app/histogram_perf/histogram_perf.o 00:02:37.618 CXX test/cpp_headers/log.o 00:02:37.618 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:37.618 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 
00:02:37.618 CXX test/cpp_headers/memory.o 00:02:37.618 CXX test/cpp_headers/mmio.o 00:02:37.618 CXX test/cpp_headers/lvol.o 00:02:37.618 CXX test/cpp_headers/nbd.o 00:02:37.618 CXX test/cpp_headers/notify.o 00:02:37.618 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:37.618 CXX test/cpp_headers/nvme.o 00:02:37.618 CXX test/cpp_headers/nvme_intel.o 00:02:37.618 CC examples/nvme/reconnect/reconnect.o 00:02:37.618 CC examples/sock/hello_world/hello_sock.o 00:02:37.618 CXX test/cpp_headers/nvme_ocssd.o 00:02:37.618 CC test/event/event_perf/event_perf.o 00:02:37.618 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:37.618 CC examples/nvme/arbitration/arbitration.o 00:02:37.618 CC test/app/stub/stub.o 00:02:37.618 CXX test/cpp_headers/nvme_spec.o 00:02:37.618 CC test/event/reactor/reactor.o 00:02:37.618 CC examples/nvme/hotplug/hotplug.o 00:02:37.618 CXX test/cpp_headers/nvme_zns.o 00:02:37.618 CC test/env/vtophys/vtophys.o 00:02:37.618 CXX test/cpp_headers/nvmf_cmd.o 00:02:37.618 CC examples/vmd/lsvmd/lsvmd.o 00:02:37.618 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:37.618 CC examples/nvme/hello_world/hello_world.o 00:02:37.618 CC examples/ioat/perf/perf.o 00:02:37.618 CC test/nvme/e2edp/nvme_dp.o 00:02:37.618 CXX test/cpp_headers/nvmf.o 00:02:37.618 CXX test/cpp_headers/nvmf_spec.o 00:02:37.618 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:37.618 CXX test/cpp_headers/nvmf_transport.o 00:02:37.618 CC test/env/memory/memory_ut.o 00:02:37.618 CC test/event/app_repeat/app_repeat.o 00:02:37.618 CXX test/cpp_headers/opal.o 00:02:37.618 CC test/env/pci/pci_ut.o 00:02:37.618 CXX test/cpp_headers/opal_spec.o 00:02:37.618 CC test/nvme/fdp/fdp.o 00:02:37.618 CC examples/bdev/bdevperf/bdevperf.o 00:02:37.618 CXX test/cpp_headers/pci_ids.o 00:02:37.618 CC test/nvme/compliance/nvme_compliance.o 00:02:37.618 CC examples/nvme/abort/abort.o 00:02:37.618 CXX test/cpp_headers/queue.o 00:02:37.618 CXX test/cpp_headers/pipe.o 00:02:37.618 CC examples/idxd/perf/perf.o 00:02:37.618 CXX test/cpp_headers/reduce.o 00:02:37.618 CC test/nvme/startup/startup.o 00:02:37.618 CC test/nvme/simple_copy/simple_copy.o 00:02:37.618 CXX test/cpp_headers/rpc.o 00:02:37.618 CC test/nvme/connect_stress/connect_stress.o 00:02:37.618 CC test/thread/poller_perf/poller_perf.o 00:02:37.618 CC test/nvme/aer/aer.o 00:02:37.618 CC examples/bdev/hello_world/hello_bdev.o 00:02:37.618 CXX test/cpp_headers/scheduler.o 00:02:37.618 CC test/nvme/reset/reset.o 00:02:37.618 CC test/nvme/overhead/overhead.o 00:02:37.618 CC test/nvme/reserve/reserve.o 00:02:37.618 CC test/nvme/cuse/cuse.o 00:02:37.618 CC test/nvme/err_injection/err_injection.o 00:02:37.618 CXX test/cpp_headers/scsi.o 00:02:37.618 CC test/nvme/boot_partition/boot_partition.o 00:02:37.618 CC test/app/bdev_svc/bdev_svc.o 00:02:37.618 CC examples/blob/hello_world/hello_blob.o 00:02:37.618 CC test/nvme/sgl/sgl.o 00:02:37.618 CC test/nvme/fused_ordering/fused_ordering.o 00:02:37.618 CC app/fio/nvme/fio_plugin.o 00:02:37.618 CC examples/blob/cli/blobcli.o 00:02:37.618 CC test/blobfs/mkfs/mkfs.o 00:02:37.618 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:37.618 CXX test/cpp_headers/scsi_spec.o 00:02:37.618 CC test/bdev/bdevio/bdevio.o 00:02:37.618 CC test/accel/dif/dif.o 00:02:37.618 CC test/event/scheduler/scheduler.o 00:02:37.618 CC test/dma/test_dma/test_dma.o 00:02:37.618 CC app/fio/bdev/fio_plugin.o 00:02:37.618 CC examples/thread/thread/thread_ex.o 00:02:37.618 CC examples/nvmf/nvmf/nvmf.o 00:02:37.618 CXX test/cpp_headers/sock.o 00:02:37.882 CC test/env/mem_callbacks/mem_callbacks.o 
00:02:37.882 LINK spdk_lspci 00:02:37.882 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:37.882 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:37.882 CC test/lvol/esnap/esnap.o 00:02:37.882 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:37.882 LINK spdk_nvme_discover 00:02:37.882 LINK nvmf_tgt 00:02:37.882 LINK interrupt_tgt 00:02:37.882 LINK rpc_client_test 00:02:38.145 LINK spdk_trace_record 00:02:38.145 LINK spdk_tgt 00:02:38.145 LINK iscsi_tgt 00:02:38.145 LINK reactor_perf 00:02:38.145 LINK vhost 00:02:38.145 LINK vtophys 00:02:38.145 LINK led 00:02:38.145 LINK poller_perf 00:02:38.145 LINK lsvmd 00:02:38.145 LINK event_perf 00:02:38.145 LINK zipf 00:02:38.145 LINK reactor 00:02:38.145 LINK env_dpdk_post_init 00:02:38.145 LINK jsoncat 00:02:38.145 LINK histogram_perf 00:02:38.145 LINK pmr_persistence 00:02:38.145 LINK verify 00:02:38.145 LINK startup 00:02:38.145 LINK app_repeat 00:02:38.145 LINK connect_stress 00:02:38.145 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:38.405 LINK reserve 00:02:38.405 LINK fused_ordering 00:02:38.405 LINK stub 00:02:38.405 LINK boot_partition 00:02:38.405 LINK doorbell_aers 00:02:38.405 LINK err_injection 00:02:38.405 CXX test/cpp_headers/stdinc.o 00:02:38.405 CXX test/cpp_headers/string.o 00:02:38.405 CXX test/cpp_headers/thread.o 00:02:38.405 LINK bdev_svc 00:02:38.405 CXX test/cpp_headers/trace.o 00:02:38.405 CXX test/cpp_headers/trace_parser.o 00:02:38.405 LINK ioat_perf 00:02:38.405 CXX test/cpp_headers/tree.o 00:02:38.405 CXX test/cpp_headers/ublk.o 00:02:38.405 CXX test/cpp_headers/util.o 00:02:38.405 LINK hello_world 00:02:38.405 CXX test/cpp_headers/uuid.o 00:02:38.405 CXX test/cpp_headers/version.o 00:02:38.405 LINK cmb_copy 00:02:38.405 CXX test/cpp_headers/vfio_user_pci.o 00:02:38.405 CXX test/cpp_headers/vfio_user_spec.o 00:02:38.405 LINK mkfs 00:02:38.405 CXX test/cpp_headers/vhost.o 00:02:38.405 CXX test/cpp_headers/vmd.o 00:02:38.405 CXX test/cpp_headers/xor.o 00:02:38.405 CXX test/cpp_headers/zipf.o 00:02:38.405 LINK hello_sock 00:02:38.405 LINK simple_copy 00:02:38.405 LINK hotplug 00:02:38.405 LINK hello_blob 00:02:38.405 LINK hello_bdev 00:02:38.405 LINK scheduler 00:02:38.406 LINK nvme_dp 00:02:38.406 LINK spdk_dd 00:02:38.406 LINK overhead 00:02:38.406 LINK sgl 00:02:38.406 LINK reset 00:02:38.406 LINK thread 00:02:38.406 LINK nvme_compliance 00:02:38.406 LINK aer 00:02:38.406 LINK spdk_trace 00:02:38.406 LINK arbitration 00:02:38.406 LINK reconnect 00:02:38.406 LINK fdp 00:02:38.406 LINK abort 00:02:38.406 LINK nvmf 00:02:38.406 LINK pci_ut 00:02:38.406 LINK idxd_perf 00:02:38.664 LINK bdevio 00:02:38.664 LINK dif 00:02:38.664 LINK accel_perf 00:02:38.664 LINK test_dma 00:02:38.664 LINK nvme_fuzz 00:02:38.664 LINK blobcli 00:02:38.664 LINK nvme_manage 00:02:38.664 LINK spdk_bdev 00:02:38.664 LINK spdk_nvme 00:02:38.664 LINK spdk_nvme_identify 00:02:38.664 LINK spdk_nvme_perf 00:02:38.664 LINK mem_callbacks 00:02:38.664 LINK vhost_fuzz 00:02:38.924 LINK spdk_top 00:02:38.924 LINK bdevperf 00:02:38.924 LINK memory_ut 00:02:39.185 LINK cuse 00:02:39.758 LINK iscsi_fuzz 00:02:42.304 LINK esnap 00:02:42.304 00:02:42.304 real 0m33.132s 00:02:42.304 user 5m4.115s 00:02:42.304 sys 3m0.984s 00:02:42.304 11:56:55 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:42.304 11:56:55 -- common/autotest_common.sh@10 -- $ set +x 00:02:42.304 ************************************ 00:02:42.304 END TEST make 00:02:42.304 ************************************ 00:02:42.304 11:56:55 -- spdk/autotest.sh@25 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:42.304 11:56:55 -- nvmf/common.sh@7 -- # uname -s 00:02:42.304 11:56:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:42.304 11:56:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:42.304 11:56:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:42.304 11:56:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:42.304 11:56:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:42.304 11:56:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:42.304 11:56:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:42.304 11:56:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:42.304 11:56:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:42.304 11:56:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:42.304 11:56:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:42.304 11:56:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:42.304 11:56:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:42.304 11:56:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:42.304 11:56:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:42.304 11:56:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:42.304 11:56:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:42.304 11:56:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:42.304 11:56:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:42.304 11:56:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.304 11:56:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.304 11:56:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.304 11:56:55 -- paths/export.sh@5 -- # export PATH 00:02:42.304 11:56:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.304 11:56:55 -- nvmf/common.sh@46 -- # : 0 00:02:42.304 11:56:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:42.304 11:56:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:42.304 11:56:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:42.304 11:56:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:42.304 11:56:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:42.304 11:56:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:42.304 11:56:55 -- nvmf/common.sh@34 
-- # '[' 0 -eq 1 ']' 00:02:42.304 11:56:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:42.304 11:56:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:42.304 11:56:55 -- spdk/autotest.sh@32 -- # uname -s 00:02:42.304 11:56:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:42.304 11:56:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:42.304 11:56:55 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:42.304 11:56:55 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:42.304 11:56:55 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:42.304 11:56:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:42.304 11:56:55 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:42.304 11:56:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:42.304 11:56:55 -- spdk/autotest.sh@48 -- # udevadm_pid=1203152 00:02:42.304 11:56:55 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:42.304 11:56:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:42.304 11:56:55 -- spdk/autotest.sh@54 -- # echo 1203154 00:02:42.304 11:56:55 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:42.304 11:56:55 -- spdk/autotest.sh@56 -- # echo 1203155 00:02:42.304 11:56:55 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:02:42.304 11:56:55 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:42.304 11:56:55 -- spdk/autotest.sh@60 -- # echo 1203156 00:02:42.305 11:56:55 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:42.305 11:56:55 -- spdk/autotest.sh@62 -- # echo 1203158 00:02:42.305 11:56:55 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:42.305 11:56:55 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:42.305 11:56:55 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:42.305 11:56:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:42.305 11:56:55 -- common/autotest_common.sh@10 -- # set +x 00:02:42.305 11:56:55 -- spdk/autotest.sh@70 -- # create_test_list 00:02:42.305 11:56:55 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:42.305 11:56:55 -- common/autotest_common.sh@10 -- # set +x 00:02:42.305 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:42.565 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:42.565 11:56:55 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:42.565 11:56:55 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:42.565 11:56:55 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
00:02:42.565 11:56:55 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:42.565 11:56:55 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:42.565 11:56:55 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:42.565 11:56:55 -- common/autotest_common.sh@1440 -- # uname 00:02:42.565 11:56:55 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:42.565 11:56:55 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:42.565 11:56:55 -- common/autotest_common.sh@1460 -- # uname 00:02:42.565 11:56:55 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:42.565 11:56:55 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:42.565 11:56:55 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:42.565 11:56:55 -- spdk/autotest.sh@83 -- # hash lcov 00:02:42.565 11:56:55 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:42.565 11:56:55 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:42.565 --rc lcov_branch_coverage=1 00:02:42.565 --rc lcov_function_coverage=1 00:02:42.565 --rc genhtml_branch_coverage=1 00:02:42.565 --rc genhtml_function_coverage=1 00:02:42.565 --rc genhtml_legend=1 00:02:42.565 --rc geninfo_all_blocks=1 00:02:42.565 ' 00:02:42.565 11:56:55 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:42.565 --rc lcov_branch_coverage=1 00:02:42.565 --rc lcov_function_coverage=1 00:02:42.565 --rc genhtml_branch_coverage=1 00:02:42.565 --rc genhtml_function_coverage=1 00:02:42.565 --rc genhtml_legend=1 00:02:42.565 --rc geninfo_all_blocks=1 00:02:42.565 ' 00:02:42.565 11:56:55 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:42.565 --rc lcov_branch_coverage=1 00:02:42.565 --rc lcov_function_coverage=1 00:02:42.565 --rc genhtml_branch_coverage=1 00:02:42.565 --rc genhtml_function_coverage=1 00:02:42.565 --rc genhtml_legend=1 00:02:42.565 --rc geninfo_all_blocks=1 00:02:42.565 --no-external' 00:02:42.565 11:56:55 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:42.565 --rc lcov_branch_coverage=1 00:02:42.565 --rc lcov_function_coverage=1 00:02:42.565 --rc genhtml_branch_coverage=1 00:02:42.565 --rc genhtml_function_coverage=1 00:02:42.566 --rc genhtml_legend=1 00:02:42.566 --rc geninfo_all_blocks=1 00:02:42.566 --no-external' 00:02:42.566 11:56:55 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:42.566 lcov: LCOV version 1.14 00:02:42.566 11:56:55 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:54.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:54.796 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:54.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:54.796 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:54.796 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:54.796 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:07.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:07.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no 
functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:07.035 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:07.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:07.035 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:08.423 11:57:21 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:03:08.423 11:57:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:08.423 11:57:21 -- common/autotest_common.sh@10 -- # set +x 00:03:08.423 11:57:21 -- spdk/autotest.sh@102 -- # rm -f 00:03:08.423 11:57:21 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:11.730 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:11.730 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:11.730 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:11.730 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:11.730 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:11.730 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:11.730 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:11.730 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:11.730 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:11.730 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:11.730 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:11.991 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:11.992 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:11.992 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:11.992 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:11.992 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:11.992 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:11.992 11:57:24 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:03:11.992 11:57:24 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:11.992 11:57:24 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:11.992 11:57:24 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:11.992 11:57:24 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:11.992 11:57:24 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:11.992 11:57:24 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:11.992 11:57:24 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:11.992 11:57:24 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:11.992 11:57:24 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:03:11.992 11:57:24 -- spdk/autotest.sh@121 
-- # ls /dev/nvme0n1 00:03:11.992 11:57:24 -- spdk/autotest.sh@121 -- # grep -v p 00:03:11.992 11:57:24 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:11.992 11:57:24 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:11.992 11:57:24 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:03:11.992 11:57:24 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:11.992 11:57:24 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:11.992 No valid GPT data, bailing 00:03:11.992 11:57:24 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:11.992 11:57:24 -- scripts/common.sh@393 -- # pt= 00:03:11.992 11:57:24 -- scripts/common.sh@394 -- # return 1 00:03:11.992 11:57:24 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:11.992 1+0 records in 00:03:11.992 1+0 records out 00:03:11.992 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00420516 s, 249 MB/s 00:03:11.992 11:57:24 -- spdk/autotest.sh@129 -- # sync 00:03:11.992 11:57:24 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:11.992 11:57:24 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:11.992 11:57:24 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:20.131 11:57:32 -- spdk/autotest.sh@135 -- # uname -s 00:03:20.131 11:57:32 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:03:20.131 11:57:32 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:20.131 11:57:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:20.131 11:57:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:20.131 11:57:32 -- common/autotest_common.sh@10 -- # set +x 00:03:20.131 ************************************ 00:03:20.131 START TEST setup.sh 00:03:20.131 ************************************ 00:03:20.131 11:57:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:20.131 * Looking for test storage... 00:03:20.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:20.131 11:57:32 -- setup/test-setup.sh@10 -- # uname -s 00:03:20.131 11:57:32 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:20.131 11:57:32 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:20.131 11:57:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:20.131 11:57:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:20.131 11:57:32 -- common/autotest_common.sh@10 -- # set +x 00:03:20.131 ************************************ 00:03:20.131 START TEST acl 00:03:20.131 ************************************ 00:03:20.132 11:57:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:20.132 * Looking for test storage... 
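The pre-cleanup step traced just above probes /dev/nvme0n1 for a partition-table signature (spdk-gpt.py reports "No valid GPT data, bailing", and blkid returns no PTTYPE) before zeroing the first MiB of the namespace. A minimal sketch of that flow, assuming the same device name and an available blkid; this is an illustration of the pattern, not the autotest code itself:

dev=/dev/nvme0n1
# Treat the namespace as free only when blkid reports no partition-table type,
# mirroring the "No valid GPT data, bailing" / return-1 path in the trace above.
pt=$(blkid -s PTTYPE -o value "$dev" || true)
if [[ -z $pt ]]; then
    dd if=/dev/zero of="$dev" bs=1M count=1   # wipe the first MiB, as the dd entry in the log shows
    sync
fi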
00:03:20.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:20.132 11:57:33 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:20.132 11:57:33 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:20.132 11:57:33 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:20.132 11:57:33 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:20.132 11:57:33 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:20.132 11:57:33 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:20.132 11:57:33 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:20.132 11:57:33 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:20.132 11:57:33 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:20.132 11:57:33 -- setup/acl.sh@12 -- # devs=() 00:03:20.132 11:57:33 -- setup/acl.sh@12 -- # declare -a devs 00:03:20.132 11:57:33 -- setup/acl.sh@13 -- # drivers=() 00:03:20.132 11:57:33 -- setup/acl.sh@13 -- # declare -A drivers 00:03:20.132 11:57:33 -- setup/acl.sh@51 -- # setup reset 00:03:20.132 11:57:33 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:20.132 11:57:33 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:24.339 11:57:36 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:24.339 11:57:36 -- setup/acl.sh@16 -- # local dev driver 00:03:24.339 11:57:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.339 11:57:36 -- setup/acl.sh@15 -- # setup output status 00:03:24.339 11:57:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.339 11:57:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:26.955 Hugepages 00:03:26.955 node hugesize free / total 00:03:26.955 11:57:39 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:26.955 11:57:39 -- setup/acl.sh@19 -- # continue 00:03:26.955 11:57:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.955 11:57:39 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:26.955 11:57:39 -- setup/acl.sh@19 -- # continue 00:03:26.955 11:57:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.955 11:57:39 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:26.955 11:57:39 -- setup/acl.sh@19 -- # continue 00:03:26.955 11:57:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.955 00:03:26.955 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:26.955 11:57:39 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:26.955 11:57:39 -- setup/acl.sh@19 -- # continue 00:03:26.955 11:57:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.955 11:57:39 -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:26.955 11:57:39 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.955 11:57:39 -- setup/acl.sh@20 -- # continue 00:03:26.955 11:57:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.955 11:57:39 -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:26.955 11:57:39 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.955 11:57:39 -- setup/acl.sh@20 -- # continue 00:03:26.955 11:57:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.955 11:57:39 -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:26.955 11:57:39 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.955 11:57:39 -- setup/acl.sh@20 -- # continue 00:03:26.955 11:57:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
00:03:26.955 11:57:39 -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:26.955 11:57:39 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.955 11:57:39 -- setup/acl.sh@20 -- # continue 00:03:26.955 11:57:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.955 11:57:39 -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:26.955 11:57:39 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.955 11:57:39 -- setup/acl.sh@20 -- # continue 00:03:26.955 11:57:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.955 11:57:39 -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:26.955 11:57:39 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.955 11:57:39 -- setup/acl.sh@20 -- # continue 00:03:26.955 11:57:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.955 11:57:39 -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:26.955 11:57:39 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.955 11:57:39 -- setup/acl.sh@20 -- # continue 00:03:26.955 11:57:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.955 11:57:39 -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:26.955 11:57:39 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.955 11:57:39 -- setup/acl.sh@20 -- # continue 00:03:26.955 11:57:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.955 11:57:39 -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:26.955 11:57:39 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:26.955 11:57:39 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:26.955 11:57:39 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:26.955 11:57:39 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:26.955 11:57:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.955 11:57:39 -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:26.955 11:57:39 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.955 11:57:39 -- setup/acl.sh@20 -- # continue 00:03:26.955 11:57:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.955 11:57:39 -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:26.955 11:57:39 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.955 11:57:39 -- setup/acl.sh@20 -- # continue 00:03:26.955 11:57:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.955 11:57:39 -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:26.955 11:57:39 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.955 11:57:39 -- setup/acl.sh@20 -- # continue 00:03:26.955 11:57:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.217 11:57:39 -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:27.217 11:57:39 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.217 11:57:39 -- setup/acl.sh@20 -- # continue 00:03:27.217 11:57:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.217 11:57:39 -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:27.217 11:57:39 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.217 11:57:39 -- setup/acl.sh@20 -- # continue 00:03:27.217 11:57:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.217 11:57:40 -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:27.217 11:57:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.217 11:57:40 -- setup/acl.sh@20 -- # continue 00:03:27.217 11:57:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.217 11:57:40 -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:27.217 11:57:40 -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.217 11:57:40 -- setup/acl.sh@20 -- # continue 00:03:27.217 11:57:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.217 11:57:40 -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:27.217 11:57:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.217 11:57:40 -- setup/acl.sh@20 -- # continue 00:03:27.217 11:57:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.217 11:57:40 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:27.217 11:57:40 -- setup/acl.sh@54 -- # run_test denied denied 00:03:27.217 11:57:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:27.217 11:57:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:27.217 11:57:40 -- common/autotest_common.sh@10 -- # set +x 00:03:27.217 ************************************ 00:03:27.217 START TEST denied 00:03:27.217 ************************************ 00:03:27.217 11:57:40 -- common/autotest_common.sh@1104 -- # denied 00:03:27.217 11:57:40 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:27.217 11:57:40 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:27.217 11:57:40 -- setup/acl.sh@38 -- # setup output config 00:03:27.217 11:57:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.217 11:57:40 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:31.421 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:31.421 11:57:43 -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:31.421 11:57:43 -- setup/acl.sh@28 -- # local dev driver 00:03:31.421 11:57:43 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:31.421 11:57:43 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:31.421 11:57:43 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:31.421 11:57:43 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:31.421 11:57:43 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:31.421 11:57:43 -- setup/acl.sh@41 -- # setup reset 00:03:31.421 11:57:43 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:31.421 11:57:43 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:35.634 00:03:35.634 real 0m8.032s 00:03:35.634 user 0m2.658s 00:03:35.634 sys 0m4.648s 00:03:35.634 11:57:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.634 11:57:48 -- common/autotest_common.sh@10 -- # set +x 00:03:35.634 ************************************ 00:03:35.634 END TEST denied 00:03:35.634 ************************************ 00:03:35.634 11:57:48 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:35.634 11:57:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:35.634 11:57:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:35.634 11:57:48 -- common/autotest_common.sh@10 -- # set +x 00:03:35.634 ************************************ 00:03:35.634 START TEST allowed 00:03:35.634 ************************************ 00:03:35.634 11:57:48 -- common/autotest_common.sh@1104 -- # allowed 00:03:35.634 11:57:48 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:35.634 11:57:48 -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:35.634 11:57:48 -- setup/acl.sh@45 -- # setup output config 00:03:35.634 11:57:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.634 11:57:48 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 
00:03:40.927 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:40.927 11:57:53 -- setup/acl.sh@47 -- # verify 00:03:40.927 11:57:53 -- setup/acl.sh@28 -- # local dev driver 00:03:40.927 11:57:53 -- setup/acl.sh@48 -- # setup reset 00:03:40.927 11:57:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:40.927 11:57:53 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:45.136 00:03:45.136 real 0m9.303s 00:03:45.136 user 0m2.685s 00:03:45.136 sys 0m4.925s 00:03:45.136 11:57:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.136 11:57:57 -- common/autotest_common.sh@10 -- # set +x 00:03:45.136 ************************************ 00:03:45.136 END TEST allowed 00:03:45.136 ************************************ 00:03:45.136 00:03:45.136 real 0m24.517s 00:03:45.136 user 0m7.908s 00:03:45.136 sys 0m14.342s 00:03:45.136 11:57:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.136 11:57:57 -- common/autotest_common.sh@10 -- # set +x 00:03:45.136 ************************************ 00:03:45.136 END TEST acl 00:03:45.136 ************************************ 00:03:45.136 11:57:57 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:45.136 11:57:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:45.136 11:57:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:45.136 11:57:57 -- common/autotest_common.sh@10 -- # set +x 00:03:45.136 ************************************ 00:03:45.136 START TEST hugepages 00:03:45.136 ************************************ 00:03:45.136 11:57:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:45.136 * Looking for test storage... 
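The two ACL subtests that finish above drive setup.sh once with a block list and once with an allow list, then grep its output for the expected binding messages. A hedged sketch of that pattern, reusing the PCI_BLOCKED/PCI_ALLOWED variables, device address, and grep expressions visible in the trace (the path matches this workspace; the snippet is illustrative rather than the acl.sh source):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# denied: the blocked controller must be skipped by setup.sh
PCI_BLOCKED='0000:65:00.0' "$spdk/scripts/setup.sh" config \
    | grep 'Skipping denied controller at 0000:65:00.0'

# allowed: after a reset, the same controller must be rebound (nvme -> vfio-pci above)
"$spdk/scripts/setup.sh" reset
PCI_ALLOWED='0000:65:00.0' "$spdk/scripts/setup.sh" config \
    | grep -E '0000:65:00.0 .*: nvme -> .*'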
00:03:45.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:45.136 11:57:57 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:45.136 11:57:57 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:45.136 11:57:57 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:45.136 11:57:57 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:45.136 11:57:57 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:45.136 11:57:57 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:45.136 11:57:57 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:45.136 11:57:57 -- setup/common.sh@18 -- # local node= 00:03:45.136 11:57:57 -- setup/common.sh@19 -- # local var val 00:03:45.136 11:57:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:45.136 11:57:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.136 11:57:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.136 11:57:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.136 11:57:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.136 11:57:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.136 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 106982344 kB' 'MemAvailable: 110178888 kB' 'Buffers: 4132 kB' 'Cached: 10728476 kB' 'SwapCached: 0 kB' 'Active: 7717840 kB' 'Inactive: 3495648 kB' 'Active(anon): 7329176 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484272 kB' 'Mapped: 167980 kB' 'Shmem: 6848296 kB' 'KReclaimable: 273128 kB' 'Slab: 939568 kB' 'SReclaimable: 273128 kB' 'SUnreclaim: 666440 kB' 'KernelStack: 27184 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460876 kB' 'Committed_AS: 8825232 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234904 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 
00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.137 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.137 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.138 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.138 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.138 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.138 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.138 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.138 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 
00:03:45.138 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.138 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.138 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.138 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.138 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.138 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.138 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.138 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.138 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.138 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.138 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.138 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.138 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.138 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.138 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.138 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.138 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.138 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.138 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # continue 00:03:45.138 11:57:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.138 11:57:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.138 11:57:57 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.138 11:57:57 -- setup/common.sh@33 -- # echo 2048 00:03:45.138 11:57:57 -- setup/common.sh@33 -- # return 0 00:03:45.138 11:57:57 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:45.138 11:57:57 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:45.138 11:57:57 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:45.138 11:57:57 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:45.138 11:57:57 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:45.138 11:57:57 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:45.138 11:57:57 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:45.138 11:57:57 -- setup/hugepages.sh@207 -- # get_nodes 00:03:45.138 11:57:57 -- setup/hugepages.sh@27 -- # local node 00:03:45.138 11:57:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.138 11:57:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:45.138 11:57:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.138 11:57:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:45.138 11:57:57 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:45.138 11:57:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.138 11:57:57 -- setup/hugepages.sh@208 -- # clear_hp 00:03:45.138 11:57:57 -- setup/hugepages.sh@37 -- # local node hp 00:03:45.138 11:57:57 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:45.138 11:57:57 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.138 11:57:57 -- setup/hugepages.sh@41 -- # echo 0 00:03:45.138 11:57:57 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.138 11:57:57 -- setup/hugepages.sh@41 -- # echo 0 00:03:45.138 11:57:57 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:45.138 11:57:57 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.138 11:57:57 -- setup/hugepages.sh@41 -- # echo 0 00:03:45.138 11:57:57 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.138 11:57:57 -- setup/hugepages.sh@41 -- # echo 0 00:03:45.138 11:57:57 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:45.138 11:57:57 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:45.138 11:57:57 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:45.138 11:57:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:45.138 11:57:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:45.138 11:57:57 -- common/autotest_common.sh@10 -- # set +x 00:03:45.138 ************************************ 00:03:45.138 START TEST default_setup 00:03:45.138 ************************************ 00:03:45.138 11:57:57 -- common/autotest_common.sh@1104 -- # default_setup 00:03:45.138 11:57:57 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:45.138 11:57:57 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:45.138 11:57:57 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:45.138 11:57:57 -- setup/hugepages.sh@51 -- # shift 00:03:45.138 11:57:57 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:45.138 11:57:57 -- setup/hugepages.sh@52 -- # local node_ids 00:03:45.138 11:57:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.138 11:57:57 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:45.138 11:57:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:45.138 11:57:57 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:45.138 11:57:57 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.138 11:57:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:45.138 11:57:57 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:45.138 11:57:57 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.138 11:57:57 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.138 11:57:57 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
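The long xtrace block above amounts to two things: get_meminfo scanning /proc/meminfo field by field until it reaches Hugepagesize and echoing 2048, and clear_hp writing 0 into every node's nr_hugepages files before default_setup begins. A condensed sketch of that lookup-and-clear pattern, using illustrative names rather than the script's own helpers:

# Print the value for one /proc/meminfo key, as the traced IFS=': ' read loop does.
meminfo_value() {
    local key=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

default_hugepages=$(meminfo_value Hugepagesize)   # 2048 (kB) on this node, per the log

# clear_hp equivalent: drop any existing reservation on every NUMA node.
for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
    echo 0 > "$hp"
done

The per-node counters that follow in the trace (nodes_sys, nodes_test, no_nodes=2) are filled by the same kind of loop over /sys/devices/system/node before default_setup requests 1024 pages on node 0.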
00:03:45.138 11:57:57 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:45.138 11:57:57 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:45.138 11:57:57 -- setup/hugepages.sh@73 -- # return 0 00:03:45.138 11:57:57 -- setup/hugepages.sh@137 -- # setup output 00:03:45.138 11:57:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.138 11:57:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:48.441 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:48.441 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:48.441 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:48.441 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:48.441 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:48.441 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:48.441 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:48.441 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:48.441 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:48.441 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:48.441 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:48.441 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:48.441 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:48.441 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:48.441 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:48.441 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:48.441 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:48.441 11:58:01 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:48.441 11:58:01 -- setup/hugepages.sh@89 -- # local node 00:03:48.441 11:58:01 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:48.441 11:58:01 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:48.441 11:58:01 -- setup/hugepages.sh@92 -- # local surp 00:03:48.441 11:58:01 -- setup/hugepages.sh@93 -- # local resv 00:03:48.441 11:58:01 -- setup/hugepages.sh@94 -- # local anon 00:03:48.441 11:58:01 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.441 11:58:01 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:48.441 11:58:01 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.441 11:58:01 -- setup/common.sh@18 -- # local node= 00:03:48.441 11:58:01 -- setup/common.sh@19 -- # local var val 00:03:48.441 11:58:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.441 11:58:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.441 11:58:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.441 11:58:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.441 11:58:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.441 11:58:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.441 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.441 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109145956 kB' 'MemAvailable: 112342516 kB' 'Buffers: 4132 kB' 'Cached: 10728604 kB' 'SwapCached: 0 kB' 'Active: 7733180 kB' 'Inactive: 3495648 kB' 'Active(anon): 7344516 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498984 kB' 'Mapped: 167684 kB' 'Shmem: 6848424 kB' 'KReclaimable: 273160 kB' 'Slab: 937544 kB' 'SReclaimable: 273160 kB' 'SUnreclaim: 664384 kB' 'KernelStack: 27168 
kB' 'PageTables: 7840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8839004 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234968 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 
11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ 
KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.442 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.442 11:58:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.442 11:58:01 -- setup/common.sh@33 -- # echo 0 00:03:48.442 11:58:01 -- setup/common.sh@33 -- # return 0 00:03:48.442 11:58:01 -- setup/hugepages.sh@97 -- # anon=0 00:03:48.443 11:58:01 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.443 11:58:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.443 11:58:01 -- setup/common.sh@18 -- # local node= 00:03:48.443 11:58:01 -- setup/common.sh@19 -- # local var val 00:03:48.443 11:58:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.443 11:58:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.443 11:58:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.443 11:58:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.443 11:58:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.443 11:58:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109158464 kB' 'MemAvailable: 112354988 kB' 'Buffers: 4132 kB' 'Cached: 10728608 kB' 'SwapCached: 0 kB' 'Active: 7733088 kB' 'Inactive: 3495648 kB' 'Active(anon): 7344424 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499420 kB' 'Mapped: 167732 kB' 'Shmem: 6848428 kB' 'KReclaimable: 273088 kB' 'Slab: 937428 kB' 'SReclaimable: 273088 kB' 'SUnreclaim: 664340 kB' 'KernelStack: 27216 kB' 'PageTables: 7984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8839016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234936 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2729332 
kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- 
setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.443 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.443 11:58:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 
00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.444 11:58:01 -- setup/common.sh@33 -- # echo 0 00:03:48.444 11:58:01 -- setup/common.sh@33 -- # return 0 00:03:48.444 11:58:01 -- setup/hugepages.sh@99 -- # surp=0 00:03:48.444 11:58:01 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:48.444 11:58:01 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:48.444 11:58:01 -- setup/common.sh@18 -- # local node= 00:03:48.444 11:58:01 -- setup/common.sh@19 -- # local var val 00:03:48.444 11:58:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.444 11:58:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.444 11:58:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.444 11:58:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.444 11:58:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.444 11:58:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109158648 kB' 'MemAvailable: 112355172 kB' 'Buffers: 4132 kB' 'Cached: 10728620 kB' 'SwapCached: 0 kB' 'Active: 7732724 kB' 'Inactive: 3495648 kB' 'Active(anon): 7344060 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499020 kB' 'Mapped: 167672 kB' 'Shmem: 6848440 kB' 'KReclaimable: 273088 kB' 'Slab: 937472 kB' 'SReclaimable: 273088 kB' 'SUnreclaim: 664384 kB' 'KernelStack: 27184 kB' 'PageTables: 7832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8839036 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234936 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- 
# [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.444 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.444 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 
-- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 
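Note on the repeated "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" entries above and below: they are xtrace output from the field-matching loop inside setup/common.sh's get_meminfo helper, which reads the relevant meminfo file once and skips every line until it reaches the requested field. A minimal sketch of that loop, reconstructed from this trace rather than copied from the script, is:

  # Sketch inferred from the xtrace above; assumed shape, not the verbatim setup/common.sh source.
  shopt -s extglob                      # needed for the +([0-9]) prefix strip below
  get_meminfo() {
      local get=$1                      # field to report, e.g. HugePages_Rsvd
      local node=${2:-}                 # optional NUMA node number
      local var val _
      local mem_f=/proc/meminfo
      # with an empty node this path does not exist, so the global file is kept
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # node files prefix every line with "Node N "
      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # the long runs of trace lines come from here
          echo "$val"
          return 0
      done
  }
  # e.g. get_meminfo HugePages_Rsvd prints 0 in this run; get_meminfo HugePages_Surp 0 reads node 0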
00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.445 11:58:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.445 11:58:01 -- setup/common.sh@33 -- # echo 0 00:03:48.445 11:58:01 -- setup/common.sh@33 -- # return 0 00:03:48.445 11:58:01 -- setup/hugepages.sh@100 -- # resv=0 00:03:48.445 11:58:01 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:48.445 nr_hugepages=1024 00:03:48.445 11:58:01 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.445 resv_hugepages=0 00:03:48.445 11:58:01 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.445 surplus_hugepages=0 00:03:48.445 11:58:01 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.445 anon_hugepages=0 00:03:48.445 11:58:01 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.445 11:58:01 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:48.445 11:58:01 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.445 11:58:01 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:03:48.445 11:58:01 -- setup/common.sh@18 -- # local node= 00:03:48.445 11:58:01 -- setup/common.sh@19 -- # local var val 00:03:48.445 11:58:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.445 11:58:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.445 11:58:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.445 11:58:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.445 11:58:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.445 11:58:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.445 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109159032 kB' 'MemAvailable: 112355556 kB' 'Buffers: 4132 kB' 'Cached: 10728652 kB' 'SwapCached: 0 kB' 'Active: 7732384 kB' 'Inactive: 3495648 kB' 'Active(anon): 7343720 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498608 kB' 'Mapped: 167672 kB' 'Shmem: 6848472 kB' 'KReclaimable: 273088 kB' 'Slab: 937472 kB' 'SReclaimable: 273088 kB' 'SUnreclaim: 664384 kB' 'KernelStack: 27168 kB' 'PageTables: 7780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8839184 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234936 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.446 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.446 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # 
continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 
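The pass in progress here is the HugePages_Total lookup; once it returns 1024 (just below), hugepages.sh compares the kernel's pool against the requested size plus the surplus and reserved counts gathered earlier. A sketch of that bookkeeping, with the values from this run and using the get_meminfo helper sketched above (assumed shape, not the script verbatim):

  nr_hugepages=1024                        # pool size requested by default_setup
  anon=$(get_meminfo AnonHugePages)        # 0 kB in this run, so THP is not inflating the count
  surp=$(get_meminfo HugePages_Surp)       # 0
  resv=$(get_meminfo HugePages_Rsvd)       # 0
  total=$(get_meminfo HugePages_Total)     # 1024
  # the check only passes if the pool exactly covers the request,
  # with nothing hidden in surplus or still reserved
  (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"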
00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.447 11:58:01 -- setup/common.sh@33 -- # echo 1024 00:03:48.447 11:58:01 -- setup/common.sh@33 -- # return 0 00:03:48.447 11:58:01 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.447 11:58:01 -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.447 11:58:01 -- setup/hugepages.sh@27 -- # local node 00:03:48.447 11:58:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.447 11:58:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:48.447 11:58:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.447 11:58:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:48.447 11:58:01 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:48.447 11:58:01 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.447 11:58:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.447 11:58:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.447 11:58:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.447 11:58:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.447 11:58:01 -- setup/common.sh@18 -- # local node=0 00:03:48.447 11:58:01 -- setup/common.sh@19 -- # local var val 00:03:48.447 11:58:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.447 11:58:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.447 11:58:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.447 11:58:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.447 11:58:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.447 11:58:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59918676 kB' 'MemUsed: 5740332 kB' 'SwapCached: 0 
kB' 'Active: 1388012 kB' 'Inactive: 204208 kB' 'Active(anon): 1219956 kB' 'Inactive(anon): 0 kB' 'Active(file): 168056 kB' 'Inactive(file): 204208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1432980 kB' 'Mapped: 96824 kB' 'AnonPages: 162528 kB' 'Shmem: 1060716 kB' 'KernelStack: 14824 kB' 'PageTables: 3404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 158064 kB' 'Slab: 483036 kB' 'SReclaimable: 158064 kB' 'SUnreclaim: 324972 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.447 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.447 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.448 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.448 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.448 11:58:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.448 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.448 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.448 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.448 11:58:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.448 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.448 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.448 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.448 11:58:01 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.448 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.448 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.448 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.448 11:58:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.448 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.448 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.448 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.448 11:58:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.448 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.448 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.448 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.448 11:58:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.448 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.448 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.448 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.448 11:58:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.709 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.709 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.709 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.709 11:58:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.709 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.709 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.709 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.709 11:58:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.709 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.709 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.709 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.709 11:58:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.709 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.709 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.709 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.709 11:58:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.709 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.709 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 
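The scan running here is the same field-matching loop, now pointed at /sys/devices/system/node/node0/meminfo to fetch node 0's HugePages_Surp; hugepages.sh folds that into its per-node expectation, which is what produces the "node0=1024 expecting 1024" line further down. A sketch of that per-node accounting (array names follow the hugepages.sh identifiers visible in the trace; the sysfs read and the echo ordering are assumptions):

  declare -a nodes_sys nodes_test          # kernel-reported vs. expected pages per node
  nodes_test[0]=1024                       # default_setup expects the whole 1024-page pool on node 0
  resv=0                                   # HugePages_Rsvd from the global pass above
  for node in /sys/devices/system/node/node[0-9]*; do
      n=${node##*node}
      # pages the kernel actually placed on this node (2 MiB pool in this run)
      nodes_sys[n]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  for n in "${!nodes_test[@]}"; do
      (( nodes_test[n] += resv ))                                # reserved pages count toward the node
      (( nodes_test[n] += $(get_meminfo HugePages_Surp "$n") ))  # surplus, 0 here
      echo "node$n=${nodes_sys[n]} expecting ${nodes_test[n]}"   # prints "node0=1024 expecting 1024" here
      [[ ${nodes_test[n]} == "${nodes_sys[n]}" ]] || echo "node $n mismatch"
  done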
00:03:48.709 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.709 11:58:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.709 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.709 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.709 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.709 11:58:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.709 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.709 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.709 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.709 11:58:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.709 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.709 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.709 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.709 11:58:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.709 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.709 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.709 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.709 11:58:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.709 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.709 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.709 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.709 11:58:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.709 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.709 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.709 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.709 11:58:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.709 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.709 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.710 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.710 11:58:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.710 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.710 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.710 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.710 11:58:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.710 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.710 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.710 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.710 11:58:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.710 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.710 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.710 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.710 11:58:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.710 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.710 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.710 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.710 11:58:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.710 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.710 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.710 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.710 11:58:01 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.710 11:58:01 -- setup/common.sh@32 -- # continue 00:03:48.710 11:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.710 11:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.710 11:58:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.710 11:58:01 -- setup/common.sh@33 -- # echo 0 00:03:48.710 11:58:01 -- setup/common.sh@33 -- # return 0 00:03:48.710 11:58:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.710 11:58:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.710 11:58:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.710 11:58:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.710 11:58:01 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:48.710 node0=1024 expecting 1024 00:03:48.710 11:58:01 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:48.710 00:03:48.710 real 0m3.846s 00:03:48.710 user 0m1.491s 00:03:48.710 sys 0m2.348s 00:03:48.710 11:58:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.710 11:58:01 -- common/autotest_common.sh@10 -- # set +x 00:03:48.710 ************************************ 00:03:48.710 END TEST default_setup 00:03:48.710 ************************************ 00:03:48.710 11:58:01 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:48.710 11:58:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:48.710 11:58:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:48.710 11:58:01 -- common/autotest_common.sh@10 -- # set +x 00:03:48.710 ************************************ 00:03:48.710 START TEST per_node_1G_alloc 00:03:48.710 ************************************ 00:03:48.710 11:58:01 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:03:48.710 11:58:01 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:48.710 11:58:01 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:48.710 11:58:01 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:48.710 11:58:01 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:48.710 11:58:01 -- setup/hugepages.sh@51 -- # shift 00:03:48.710 11:58:01 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:48.710 11:58:01 -- setup/hugepages.sh@52 -- # local node_ids 00:03:48.710 11:58:01 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.710 11:58:01 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:48.710 11:58:01 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:48.710 11:58:01 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:48.710 11:58:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.710 11:58:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:48.710 11:58:01 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.710 11:58:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.710 11:58:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.710 11:58:01 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:48.710 11:58:01 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:48.710 11:58:01 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:48.710 11:58:01 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:48.710 11:58:01 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:48.710 11:58:01 -- setup/hugepages.sh@73 -- # return 0 00:03:48.710 11:58:01 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:48.710 
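Just above, hugepages.sh works out how many default-size hugepages the per_node_1G_alloc test needs: 1048576 kB (1 GiB) at the 2048 kB page size seen in the meminfo dumps gives 512 pages, and with nodes 0 and 1 both requested each node gets 512, for the 1024 total that is verified later. A minimal sketch of that arithmetic, with illustrative variable names rather than the script's own:

#!/usr/bin/env bash
# Sketch: derive the per-node hugepage counts traced above.
requested_kb=1048576        # 1 GiB worth of hugepages per node
hugepage_kb=2048            # matches 'Hugepagesize: 2048 kB' in the dumps
nodes=(0 1)

per_node=$(( requested_kb / hugepage_kb ))        # 512
total=$(( per_node * ${#nodes[@]} ))              # 1024, checked by verify_nr_hugepages later
echo "NRHUGE=$per_node HUGENODE=$(IFS=,; echo "${nodes[*]}") expecting $total pages"

Those NRHUGE and HUGENODE values are then handed to scripts/setup.sh, whose output follows; the PCI devices it manages are already bound to vfio-pci on this host, hence the "Already using the vfio-pci driver" lines.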
11:58:01 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:48.710 11:58:01 -- setup/hugepages.sh@146 -- # setup output 00:03:48.710 11:58:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.710 11:58:01 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:52.015 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:52.015 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:52.015 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:52.015 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:52.015 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:52.015 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:52.015 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:52.015 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:52.015 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:52.015 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:52.015 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:52.015 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:52.015 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:52.015 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:52.015 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:52.015 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:52.015 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:52.015 11:58:04 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:52.015 11:58:04 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:52.015 11:58:04 -- setup/hugepages.sh@89 -- # local node 00:03:52.015 11:58:04 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:52.015 11:58:04 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:52.015 11:58:04 -- setup/hugepages.sh@92 -- # local surp 00:03:52.015 11:58:04 -- setup/hugepages.sh@93 -- # local resv 00:03:52.015 11:58:04 -- setup/hugepages.sh@94 -- # local anon 00:03:52.015 11:58:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:52.015 11:58:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:52.015 11:58:04 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:52.015 11:58:04 -- setup/common.sh@18 -- # local node= 00:03:52.015 11:58:04 -- setup/common.sh@19 -- # local var val 00:03:52.015 11:58:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.015 11:58:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.015 11:58:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.015 11:58:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.015 11:58:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.015 11:58:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.015 11:58:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109160904 kB' 'MemAvailable: 112357428 kB' 'Buffers: 4132 kB' 'Cached: 10728732 kB' 'SwapCached: 0 kB' 'Active: 7731964 kB' 'Inactive: 3495648 kB' 'Active(anon): 7343300 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498160 kB' 'Mapped: 166720 
kB' 'Shmem: 6848552 kB' 'KReclaimable: 273088 kB' 'Slab: 937772 kB' 'SReclaimable: 273088 kB' 'SUnreclaim: 664684 kB' 'KernelStack: 27152 kB' 'PageTables: 7736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8826524 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234984 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.015 11:58:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.015 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.015 11:58:04 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 11:58:04 -- setup/common.sh@33 -- # echo 0 00:03:52.016 11:58:04 -- setup/common.sh@33 -- # return 0 00:03:52.016 11:58:04 -- setup/hugepages.sh@97 -- # anon=0 00:03:52.016 11:58:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:52.016 11:58:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.016 11:58:04 -- setup/common.sh@18 -- # local node= 00:03:52.016 11:58:04 -- setup/common.sh@19 -- # local var val 00:03:52.016 11:58:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.016 11:58:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.016 11:58:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.016 11:58:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.016 11:58:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.016 11:58:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109162016 kB' 'MemAvailable: 112358540 kB' 'Buffers: 4132 kB' 'Cached: 10728740 kB' 'SwapCached: 0 kB' 'Active: 7732492 kB' 'Inactive: 3495648 kB' 'Active(anon): 7343828 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498744 kB' 'Mapped: 166724 kB' 'Shmem: 6848560 kB' 'KReclaimable: 273088 kB' 'Slab: 937732 kB' 'SReclaimable: 273088 kB' 'SUnreclaim: 664644 kB' 'KernelStack: 27136 kB' 'PageTables: 7680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8826668 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234952 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.016 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.016 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- 
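The long runs of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue' around this point are the xtrace of one helper doing a single lookup: it walks /proc/meminfo key by key until it hits the requested field, echoes that field's value, and returns. A condensed sketch of the same lookup under a hypothetical name meminfo_value (the real helper reads the file into an array first, but the matching is identical):

#!/usr/bin/env bash
# Sketch: fetch one field from /proc/meminfo, e.g. HugePages_Surp.
meminfo_value() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] || continue   # every non-matching key produces one 'continue' trace line
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

surp=$(meminfo_value HugePages_Surp)   # 0 in the dumps above
echo "HugePages_Surp=$surp"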
setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 
11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:04 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.017 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 11:58:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.018 11:58:05 -- setup/common.sh@33 -- # echo 0 00:03:52.018 11:58:05 -- setup/common.sh@33 -- # return 0 00:03:52.018 11:58:05 -- setup/hugepages.sh@99 -- # surp=0 00:03:52.018 11:58:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:52.018 11:58:05 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:52.018 11:58:05 -- setup/common.sh@18 -- # local node= 00:03:52.018 11:58:05 -- setup/common.sh@19 -- # local var val 00:03:52.018 11:58:05 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.018 11:58:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.018 11:58:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.018 11:58:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.018 11:58:05 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.018 11:58:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109162468 kB' 'MemAvailable: 112358992 kB' 'Buffers: 4132 kB' 'Cached: 10728744 kB' 'SwapCached: 0 kB' 'Active: 7731612 kB' 'Inactive: 3495648 kB' 'Active(anon): 7342948 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497788 kB' 'Mapped: 166620 kB' 'Shmem: 6848564 kB' 'KReclaimable: 273088 kB' 'Slab: 937724 kB' 'SReclaimable: 273088 kB' 'SUnreclaim: 664636 kB' 'KernelStack: 27168 kB' 'PageTables: 7756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8826684 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234952 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 11:58:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # continue 
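The meminfo dumps repeated through this stretch are internally consistent on the hugepage side: 'HugePages_Total: 1024' at 'Hugepagesize: 2048 kB' accounts for exactly the 'Hugetlb: 2097152 kB' the kernel reports (Hugetlb covers every hugepage size, so the equality holds here only because the default 2 MiB pool is the only one populated). A quick live check of that relation:

#!/usr/bin/env bash
# Sketch: recompute Hugetlb from the default pool; values follow whatever host this runs on.
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)    # 1024 in this log
size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)     # 2048 in this log
hugetlb=$(awk '/^Hugetlb:/ {print $2}' /proc/meminfo)          # 2097152 in this log
echo "HugePages_Total * Hugepagesize = $(( total * size_kb )) kB, Hugetlb = ${hugetlb} kB"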
00:03:52.019 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 11:58:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 11:58:05 -- setup/common.sh@33 -- # echo 0 00:03:52.019 11:58:05 -- setup/common.sh@33 -- # return 0 00:03:52.019 11:58:05 -- setup/hugepages.sh@100 -- # resv=0 00:03:52.019 11:58:05 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:52.019 nr_hugepages=1024 00:03:52.019 11:58:05 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:52.019 resv_hugepages=0 00:03:52.019 11:58:05 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:52.019 surplus_hugepages=0 00:03:52.019 11:58:05 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:52.019 anon_hugepages=0 00:03:52.019 11:58:05 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.019 11:58:05 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 
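verify_nr_hugepages, whose tail is traced here, reduces to simple accounting once the lookups are done: the pool the kernel reports must equal the requested count plus surplus plus reserved pages, and since surp and resv are both 0 here that is simply 1024 == 1024. A stripped-down sketch of those checks under a hypothetical name verify_pool (the real function also cross-checks the per-node sysfs counters, as the earlier 'node0=1024 expecting 1024' line shows):

#!/usr/bin/env bash
# Sketch: the arithmetic behind '(( 1024 == nr_hugepages + surp + resv ))'.
verify_pool() {
    local expected=$1 total surp resv
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    (( total == expected + surp + resv )) || { echo "unexpected surplus/reserved pages"; return 1; }
    (( total == expected ))               || { echo "pool is $total, expected $expected"; return 1; }
    echo "ok: $total pages, surp=$surp resv=$resv"
}

verify_pool 1024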
00:03:52.019 11:58:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:52.019 11:58:05 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:52.019 11:58:05 -- setup/common.sh@18 -- # local node= 00:03:52.019 11:58:05 -- setup/common.sh@19 -- # local var val 00:03:52.019 11:58:05 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.019 11:58:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.019 11:58:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.019 11:58:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.019 11:58:05 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.019 11:58:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.282 11:58:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109163092 kB' 'MemAvailable: 112359616 kB' 'Buffers: 4132 kB' 'Cached: 10728776 kB' 'SwapCached: 0 kB' 'Active: 7731656 kB' 'Inactive: 3495648 kB' 'Active(anon): 7342992 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497788 kB' 'Mapped: 166620 kB' 'Shmem: 6848596 kB' 'KReclaimable: 273088 kB' 'Slab: 937724 kB' 'SReclaimable: 273088 kB' 'SUnreclaim: 664636 kB' 'KernelStack: 27168 kB' 'PageTables: 7756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8826704 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234952 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:03:52.282 11:58:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.282 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.282 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.282 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.282 11:58:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.282 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.282 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.282 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.282 11:58:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.282 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.282 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.282 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.282 11:58:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 
-- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 
00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- 
setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.283 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.283 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.284 11:58:05 -- setup/common.sh@33 -- # echo 1024 00:03:52.284 11:58:05 -- setup/common.sh@33 -- # return 0 00:03:52.284 11:58:05 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.284 11:58:05 -- setup/hugepages.sh@112 -- # get_nodes 00:03:52.284 11:58:05 -- setup/hugepages.sh@27 -- # local node 00:03:52.284 11:58:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.284 11:58:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:52.284 11:58:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.284 11:58:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:52.284 11:58:05 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:52.284 11:58:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.284 11:58:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.284 11:58:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.284 11:58:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:52.284 11:58:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.284 11:58:05 -- setup/common.sh@18 -- # local node=0 00:03:52.284 11:58:05 -- setup/common.sh@19 -- # local var val 00:03:52.284 11:58:05 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.284 11:58:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.284 11:58:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.284 11:58:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.284 11:58:05 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.284 11:58:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:52.284 11:58:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60958548 kB' 'MemUsed: 4700460 kB' 'SwapCached: 0 kB' 'Active: 1386200 kB' 'Inactive: 204208 kB' 'Active(anon): 1218144 kB' 'Inactive(anon): 0 kB' 'Active(file): 168056 kB' 'Inactive(file): 204208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1433000 kB' 'Mapped: 96024 kB' 'AnonPages: 160600 kB' 'Shmem: 1060736 kB' 'KernelStack: 14792 kB' 'PageTables: 3244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 158064 kB' 'Slab: 483028 kB' 'SReclaimable: 158064 kB' 'SUnreclaim: 324964 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # 
continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.284 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.284 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 
11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@33 -- # echo 0 00:03:52.285 11:58:05 -- setup/common.sh@33 -- # return 0 00:03:52.285 11:58:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.285 11:58:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.285 11:58:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.285 11:58:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:52.285 11:58:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.285 11:58:05 -- setup/common.sh@18 -- # local node=1 00:03:52.285 11:58:05 -- setup/common.sh@19 -- # local var val 00:03:52.285 11:58:05 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.285 11:58:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.285 11:58:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:52.285 11:58:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:52.285 11:58:05 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.285 11:58:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679844 kB' 'MemFree: 48203312 kB' 'MemUsed: 12476532 kB' 'SwapCached: 0 kB' 'Active: 6345760 kB' 'Inactive: 3291440 kB' 'Active(anon): 6125152 kB' 'Inactive(anon): 0 kB' 'Active(file): 220608 kB' 'Inactive(file): 3291440 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9299940 kB' 'Mapped: 70596 kB' 'AnonPages: 337520 kB' 'Shmem: 5787892 kB' 'KernelStack: 12376 kB' 'PageTables: 4592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115024 kB' 'Slab: 454696 kB' 'SReclaimable: 115024 kB' 'SUnreclaim: 339672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 
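The node-0 and node-1 lookups traced above all follow one pattern from setup/common.sh: when a node number is supplied, mem_f is switched from /proc/meminfo to /sys/devices/system/node/nodeN/meminfo, mapfile loads the file, the leading "Node N " prefix is stripped with an extglob expansion, and each "key: value" pair is read with IFS=': ' until the requested key (HugePages_Surp here) matches, at which point its value is echoed. Below is a minimal self-contained sketch of that pattern; the function name meminfo_value is illustrative and not the actual SPDK helper.

    #!/usr/bin/env bash
    shopt -s extglob   # required for the "Node +([0-9]) " prefix strip below

    # meminfo_value <Key> [<node>]: print the value of <Key> from /proc/meminfo,
    # or from the per-node meminfo file when a NUMA node number is supplied.
    meminfo_value() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    # Example: surplus huge pages on node 0, as queried in the trace above.
    meminfo_value HugePages_Surp 0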
00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.285 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.285 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.286 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.286 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.286 11:58:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.286 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.286 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.286 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.286 11:58:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.286 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.286 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.286 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.286 11:58:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.286 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.286 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.286 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.286 11:58:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.286 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.286 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.286 11:58:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.286 11:58:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.286 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.286 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.286 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.286 11:58:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.286 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.286 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.286 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.286 11:58:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.286 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.286 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.286 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.286 11:58:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.286 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.286 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.286 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.286 11:58:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.286 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.286 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.286 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.286 11:58:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.286 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.286 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.286 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.286 11:58:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.286 11:58:05 -- setup/common.sh@32 -- # continue 00:03:52.286 11:58:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.286 11:58:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.286 11:58:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.286 11:58:05 -- setup/common.sh@33 -- # echo 0 00:03:52.286 11:58:05 -- setup/common.sh@33 -- # return 0 00:03:52.286 11:58:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.286 11:58:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.286 11:58:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.286 11:58:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.286 11:58:05 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:52.286 node0=512 expecting 512 00:03:52.286 11:58:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.286 11:58:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.286 11:58:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.286 11:58:05 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:52.286 node1=512 expecting 512 00:03:52.286 11:58:05 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:52.286 00:03:52.286 real 0m3.597s 00:03:52.286 user 0m1.445s 00:03:52.286 sys 0m2.214s 00:03:52.286 11:58:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.286 11:58:05 -- common/autotest_common.sh@10 -- # set +x 00:03:52.286 ************************************ 00:03:52.286 END TEST per_node_1G_alloc 00:03:52.286 ************************************ 00:03:52.286 11:58:05 -- 
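What per_node_1G_alloc has just confirmed is an accounting identity: the global HugePages_Total (1024) must equal nr_hugepages plus surplus plus reserved pages, and each NUMA node must end up holding the share configured for it, hence the "node0=512 expecting 512" / "node1=512 expecting 512" lines. The sketch below compresses that check, assuming two nodes and reusing the hypothetical meminfo_value helper from the earlier sketch; it mirrors the idea of hugepages.sh@110..130 rather than its exact bookkeeping.

    # Global identity: total == requested + surplus + reserved.
    nr_hugepages=1024
    surp=$(meminfo_value HugePages_Surp)
    resv=$(meminfo_value HugePages_Rsvd)
    total=$(meminfo_value HugePages_Total)
    (( total == nr_hugepages + surp + resv )) || echo "global hugepage count mismatch"

    # Per-node split: each node is expected to hold half of the pool.
    expected=512
    for node in 0 1; do
        node_total=$(meminfo_value HugePages_Total "$node")
        node_surp=$(meminfo_value HugePages_Surp "$node")
        echo "node$node=$node_total expecting $expected (surplus $node_surp)"
    done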
setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:52.286 11:58:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:52.286 11:58:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:52.286 11:58:05 -- common/autotest_common.sh@10 -- # set +x 00:03:52.286 ************************************ 00:03:52.286 START TEST even_2G_alloc 00:03:52.286 ************************************ 00:03:52.286 11:58:05 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:03:52.286 11:58:05 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:52.286 11:58:05 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:52.286 11:58:05 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:52.286 11:58:05 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.286 11:58:05 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:52.286 11:58:05 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:52.286 11:58:05 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:52.286 11:58:05 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.286 11:58:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:52.286 11:58:05 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.286 11:58:05 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.286 11:58:05 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.286 11:58:05 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:52.286 11:58:05 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:52.286 11:58:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.286 11:58:05 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:52.286 11:58:05 -- setup/hugepages.sh@83 -- # : 512 00:03:52.286 11:58:05 -- setup/hugepages.sh@84 -- # : 1 00:03:52.286 11:58:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.286 11:58:05 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:52.286 11:58:05 -- setup/hugepages.sh@83 -- # : 0 00:03:52.286 11:58:05 -- setup/hugepages.sh@84 -- # : 0 00:03:52.286 11:58:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.286 11:58:05 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:52.286 11:58:05 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:52.286 11:58:05 -- setup/hugepages.sh@153 -- # setup output 00:03:52.286 11:58:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.286 11:58:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:55.591 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:55.591 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:55.591 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:55.591 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:55.591 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:55.591 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:55.591 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:55.591 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:55.591 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:55.591 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:55.591 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:55.591 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:55.591 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:55.591 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:55.591 0000:00:01.3 (8086 
0b00): Already using the vfio-pci driver 00:03:55.591 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:55.591 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:55.855 11:58:08 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:55.855 11:58:08 -- setup/hugepages.sh@89 -- # local node 00:03:55.855 11:58:08 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:55.855 11:58:08 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.855 11:58:08 -- setup/hugepages.sh@92 -- # local surp 00:03:55.855 11:58:08 -- setup/hugepages.sh@93 -- # local resv 00:03:55.855 11:58:08 -- setup/hugepages.sh@94 -- # local anon 00:03:55.855 11:58:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:55.855 11:58:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.855 11:58:08 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.855 11:58:08 -- setup/common.sh@18 -- # local node= 00:03:55.855 11:58:08 -- setup/common.sh@19 -- # local var val 00:03:55.855 11:58:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.855 11:58:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.855 11:58:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.855 11:58:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.855 11:58:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.855 11:58:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.855 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 11:58:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109177812 kB' 'MemAvailable: 112374336 kB' 'Buffers: 4132 kB' 'Cached: 10728888 kB' 'SwapCached: 0 kB' 'Active: 7733276 kB' 'Inactive: 3495648 kB' 'Active(anon): 7344612 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498744 kB' 'Mapped: 166712 kB' 'Shmem: 6848708 kB' 'KReclaimable: 273088 kB' 'Slab: 937584 kB' 'SReclaimable: 273088 kB' 'SUnreclaim: 664496 kB' 'KernelStack: 27184 kB' 'PageTables: 7848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8828020 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234936 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:03:55.855 11:58:08 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.855 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.855 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 11:58:08 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.855 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.855 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 11:58:08 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.855 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.855 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 11:58:08 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.855 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.855 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 11:58:08 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.855 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.855 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.855 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.855 11:58:08 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 
11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.856 11:58:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.856 11:58:08 -- 
setup/common.sh@33 -- # echo 0 00:03:55.856 11:58:08 -- setup/common.sh@33 -- # return 0 00:03:55.856 11:58:08 -- setup/hugepages.sh@97 -- # anon=0 00:03:55.856 11:58:08 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:55.856 11:58:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.856 11:58:08 -- setup/common.sh@18 -- # local node= 00:03:55.856 11:58:08 -- setup/common.sh@19 -- # local var val 00:03:55.856 11:58:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.856 11:58:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.856 11:58:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.856 11:58:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.856 11:58:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.856 11:58:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.856 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109178552 kB' 'MemAvailable: 112375076 kB' 'Buffers: 4132 kB' 'Cached: 10728892 kB' 'SwapCached: 0 kB' 'Active: 7733028 kB' 'Inactive: 3495648 kB' 'Active(anon): 7344364 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498972 kB' 'Mapped: 166632 kB' 'Shmem: 6848712 kB' 'KReclaimable: 273088 kB' 'Slab: 937528 kB' 'SReclaimable: 273088 kB' 'SUnreclaim: 664440 kB' 'KernelStack: 27200 kB' 'PageTables: 7936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8828404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234936 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 
11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 
11:58:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': 
' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.857 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.857 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.858 11:58:08 -- setup/common.sh@33 -- # echo 0 00:03:55.858 11:58:08 -- setup/common.sh@33 -- # return 0 00:03:55.858 11:58:08 -- setup/hugepages.sh@99 -- # surp=0 00:03:55.858 11:58:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:55.858 11:58:08 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:55.858 11:58:08 -- setup/common.sh@18 -- # local node= 00:03:55.858 11:58:08 -- setup/common.sh@19 -- # local var val 00:03:55.858 11:58:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.858 11:58:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.858 11:58:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.858 11:58:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.858 11:58:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.858 11:58:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109178544 kB' 'MemAvailable: 112375068 kB' 'Buffers: 4132 kB' 'Cached: 10728904 kB' 'SwapCached: 0 kB' 'Active: 7733040 kB' 'Inactive: 3495648 kB' 'Active(anon): 7344376 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498972 kB' 'Mapped: 166632 kB' 'Shmem: 6848724 kB' 'KReclaimable: 273088 kB' 'Slab: 937528 kB' 'SReclaimable: 273088 kB' 'SUnreclaim: 664440 kB' 'KernelStack: 27200 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8828416 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234952 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 
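The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" records above and below are the xtrace of get_meminfo walking a single meminfo snapshot one field at a time: the file is captured with mapfile, each row is split on IFS=': ' into a key and a value, every non-matching key logs a "continue", and the first match ends the call with "echo <value>" / "return 0" (the echo at setup/common.sh@33). A minimal sketch of that loop, reconstructed from this trace rather than copied from setup/common.sh, so argument handling and exact line numbers are assumptions:

# get_meminfo as reconstructed from the xtrace in this log; a sketch under that
# assumption, not the verbatim setup/common.sh function.
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo mem line
    # With a node argument the per-node file is used instead (seen further down
    # when hugepages.sh checks node0 and node1).
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    for line in "${mem[@]}"; do
        line=${line#"Node $node "}            # per-node rows carry a "Node N " prefix
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue      # each mismatch is one "continue" record above
        echo "$val"                           # the match produces the "echo <value>" at @33
        return 0
    done
    return 1
}

It is called as, e.g., surp=$(get_meminfo HugePages_Surp), which is why every single lookup replays the whole field list in the trace before one value comes back.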
00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.858 11:58:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.858 11:58:08 -- 
setup/common.sh@32 -- # continue 00:03:55.858 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.859 11:58:08 -- setup/common.sh@33 -- # echo 0 00:03:55.859 11:58:08 -- setup/common.sh@33 -- # return 0 00:03:55.859 11:58:08 -- setup/hugepages.sh@100 -- # resv=0 00:03:55.859 11:58:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:55.859 nr_hugepages=1024 00:03:55.859 11:58:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.859 resv_hugepages=0 00:03:55.859 11:58:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.859 surplus_hugepages=0 00:03:55.859 11:58:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:55.859 anon_hugepages=0 00:03:55.859 11:58:08 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.859 11:58:08 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:55.859 11:58:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.859 11:58:08 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:55.859 11:58:08 -- setup/common.sh@18 -- # local node= 00:03:55.859 11:58:08 -- setup/common.sh@19 -- # local var val 00:03:55.859 11:58:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.859 11:58:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.859 11:58:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.859 11:58:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.859 11:58:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.859 11:58:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109179972 kB' 'MemAvailable: 112376496 kB' 'Buffers: 4132 kB' 'Cached: 10728920 kB' 'SwapCached: 0 kB' 'Active: 7733448 kB' 'Inactive: 3495648 kB' 'Active(anon): 7344784 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499500 kB' 'Mapped: 166632 kB' 'Shmem: 6848740 kB' 'KReclaimable: 273088 kB' 'Slab: 937536 
kB' 'SReclaimable: 273088 kB' 'SUnreclaim: 664448 kB' 'KernelStack: 27232 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8834124 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234952 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.859 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.859 11:58:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 
11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.860 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.860 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 
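The pass still in progress here is the HugePages_Total lookup that feeds the consistency check at setup/hugepages.sh@110 a few records below ("echo 1024" / "return 0", then "(( 1024 == nr_hugepages + surp + resv ))"). Pulled together, the caller side of this stretch looks roughly like the following; it is assembled from the @97-@110 records of this trace, and the field behind the earlier anon=0 assignment is not visible in this excerpt, so AnonHugePages is an assumption:

# Caller-side accounting, assembled from the hugepages.sh@97-@110 records in
# this trace (get_meminfo is the helper sketched after the HugePages_Rsvd pass).
nr_hugepages=1024                             # requested size, per the echo at @102
anon=$(get_meminfo AnonHugePages)             # @97  -> anon=0 (field name assumed)
surp=$(get_meminfo HugePages_Surp)            # @99  -> surp=0
resv=$(get_meminfo HugePages_Rsvd)            # @100 -> resv=0
echo "nr_hugepages=$nr_hugepages"             # @102-@105 report the four values
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"
# @107/@109 ran the same comparison against the requested count (1024) just above;
# @110 re-reads the kernel's total and requires it to balance against the sum.
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))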
00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.861 11:58:08 -- setup/common.sh@33 -- # echo 1024 00:03:55.861 11:58:08 -- setup/common.sh@33 -- # return 0 00:03:55.861 11:58:08 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.861 11:58:08 -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.861 11:58:08 -- setup/hugepages.sh@27 -- # local node 00:03:55.861 11:58:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.861 11:58:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:55.861 11:58:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.861 11:58:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:55.861 11:58:08 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:55.861 11:58:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.861 11:58:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.861 11:58:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.861 11:58:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.861 11:58:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.861 11:58:08 -- setup/common.sh@18 -- # local node=0 00:03:55.861 11:58:08 -- setup/common.sh@19 -- # local var val 00:03:55.861 11:58:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.861 11:58:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.861 11:58:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.861 11:58:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.861 11:58:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.861 11:58:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60971992 kB' 'MemUsed: 4687016 kB' 'SwapCached: 0 kB' 'Active: 1388748 kB' 'Inactive: 204208 kB' 'Active(anon): 1220692 kB' 'Inactive(anon): 0 kB' 'Active(file): 168056 kB' 'Inactive(file): 204208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1433060 kB' 'Mapped: 96036 kB' 'AnonPages: 163292 kB' 'Shmem: 1060796 kB' 'KernelStack: 14888 kB' 'PageTables: 3632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 158064 kB' 'Slab: 482952 kB' 'SReclaimable: 158064 kB' 'SUnreclaim: 324888 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 
11:58:08 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 
-- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.861 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.861 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
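From setup/hugepages.sh@112 onward the same bookkeeping repeats per NUMA node: get_nodes found node0 and node1 with an expected split of 512 pages each (nodes_sys[...]=512 at @30), and the loop at @115-@117 re-runs get_meminfo against /sys/devices/system/node/nodeN/meminfo; the node0 snapshot above already reports HugePages_Total: 512, HugePages_Free: 512, HugePages_Surp: 0. Roughly, and assuming nodes_test[] starts from that 512/512 split (its initialisation is not part of this excerpt):

# Per-node pass as suggested by the hugepages.sh@112-@117 records; a sketch only.
# get_meminfo is the helper sketched earlier in this log.
nodes_test=(512 512)       # expected hugepages per node (node0, node1), assumed
resv=0 surp=0
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))               # @116: fold in reserved pages (0 here)
    surp=$(get_meminfo HugePages_Surp "$node")   # @117: reads node$node/meminfo
    (( nodes_test[node] += surp ))               # both nodes contribute 0 in this run
done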
00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@33 -- # echo 0 00:03:55.862 11:58:08 -- setup/common.sh@33 -- # return 0 00:03:55.862 11:58:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.862 11:58:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.862 11:58:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.862 11:58:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:55.862 11:58:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.862 11:58:08 -- setup/common.sh@18 -- # local node=1 00:03:55.862 11:58:08 -- setup/common.sh@19 -- # local var val 00:03:55.862 11:58:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.862 11:58:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.862 11:58:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:55.862 11:58:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:55.862 11:58:08 -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:55.862 11:58:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679844 kB' 'MemFree: 48212332 kB' 'MemUsed: 12467512 kB' 'SwapCached: 0 kB' 'Active: 6345456 kB' 'Inactive: 3291440 kB' 'Active(anon): 6124848 kB' 'Inactive(anon): 0 kB' 'Active(file): 220608 kB' 'Inactive(file): 3291440 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9300024 kB' 'Mapped: 70596 kB' 'AnonPages: 336980 kB' 'Shmem: 5787976 kB' 'KernelStack: 12360 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115024 kB' 'Slab: 454604 kB' 'SReclaimable: 115024 kB' 'SUnreclaim: 339580 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.862 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.862 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.863 11:58:08 
-- setup/common.sh@31 -- # read -r var val _ 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # continue 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.863 11:58:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.863 11:58:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.863 11:58:08 -- setup/common.sh@33 -- # echo 0 00:03:55.863 11:58:08 -- setup/common.sh@33 -- # return 0 00:03:55.863 11:58:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.863 11:58:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.863 11:58:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.863 11:58:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.863 11:58:08 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:55.863 node0=512 expecting 512 00:03:55.863 11:58:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.863 11:58:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.863 11:58:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.863 11:58:08 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:55.863 node1=512 expecting 512 00:03:55.863 11:58:08 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:55.863 00:03:55.863 real 0m3.696s 00:03:55.863 user 0m1.487s 00:03:55.863 sys 0m2.270s 00:03:55.863 11:58:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.863 11:58:08 -- common/autotest_common.sh@10 -- # set +x 00:03:55.863 ************************************ 00:03:55.863 END TEST even_2G_alloc 00:03:55.863 ************************************ 00:03:56.125 11:58:08 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:56.125 11:58:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:56.125 11:58:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:56.125 11:58:08 -- common/autotest_common.sh@10 -- # set +x 00:03:56.125 ************************************ 00:03:56.125 START TEST odd_alloc 00:03:56.125 ************************************ 00:03:56.125 11:58:08 -- common/autotest_common.sh@1104 -- # odd_alloc 00:03:56.125 11:58:08 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:56.125 11:58:08 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:56.125 11:58:08 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:56.125 11:58:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:56.125 11:58:08 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:56.125 11:58:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:56.125 11:58:08 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:56.125 11:58:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:56.125 11:58:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:56.125 11:58:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:56.125 11:58:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:56.125 11:58:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:56.125 11:58:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:56.125 
11:58:08 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:56.125 11:58:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.125 11:58:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:56.125 11:58:08 -- setup/hugepages.sh@83 -- # : 513 00:03:56.125 11:58:08 -- setup/hugepages.sh@84 -- # : 1 00:03:56.125 11:58:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.125 11:58:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:56.125 11:58:08 -- setup/hugepages.sh@83 -- # : 0 00:03:56.125 11:58:08 -- setup/hugepages.sh@84 -- # : 0 00:03:56.125 11:58:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.125 11:58:08 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:56.125 11:58:08 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:56.125 11:58:08 -- setup/hugepages.sh@160 -- # setup output 00:03:56.125 11:58:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.125 11:58:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:59.427 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:59.427 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:59.427 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:59.427 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:59.427 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:59.427 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:59.427 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:59.427 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:59.427 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:59.427 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:59.427 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:59.428 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:59.428 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:59.428 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:59.428 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:59.428 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:59.428 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:59.428 11:58:12 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:59.428 11:58:12 -- setup/hugepages.sh@89 -- # local node 00:03:59.428 11:58:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.428 11:58:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.428 11:58:12 -- setup/hugepages.sh@92 -- # local surp 00:03:59.428 11:58:12 -- setup/hugepages.sh@93 -- # local resv 00:03:59.428 11:58:12 -- setup/hugepages.sh@94 -- # local anon 00:03:59.428 11:58:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.428 11:58:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.428 11:58:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.428 11:58:12 -- setup/common.sh@18 -- # local node= 00:03:59.428 11:58:12 -- setup/common.sh@19 -- # local var val 00:03:59.428 11:58:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.428 11:58:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.428 11:58:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.428 11:58:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.428 11:58:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.428 11:58:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109216320 kB' 'MemAvailable: 112412784 kB' 'Buffers: 4132 kB' 'Cached: 10729044 kB' 'SwapCached: 0 kB' 'Active: 7733292 kB' 'Inactive: 3495648 kB' 'Active(anon): 7344628 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499588 kB' 'Mapped: 166844 kB' 'Shmem: 6848864 kB' 'KReclaimable: 272968 kB' 'Slab: 937636 kB' 'SReclaimable: 272968 kB' 'SUnreclaim: 664668 kB' 'KernelStack: 27168 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8834396 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235000 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 
-- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ AnonPages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.428 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.428 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 
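Editor's note: the trace above shows setup/common.sh's meminfo lookup in action. When no node is given it reads /proc/meminfo, otherwise it switches to /sys/devices/system/node/nodeN/meminfo, then walks every "key: value" pair with IFS=': ' read -r var val _, skipping (continue) each field that is not the one requested and finally echoing the matching value. Below is a minimal sketch of that pattern for illustration only; the helper name get_meminfo_value is hypothetical and this is not the actual setup/common.sh source.

get_meminfo_value() {   # hypothetical name, illustrative only
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    # Per-node meminfo lines carry a "Node <n> " prefix; drop it, then scan
    # "key: value [kB]" pairs and print the value of the requested key.
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    echo 0   # key not present (assumption: default to 0)
}

Under that assumption, get_meminfo_value HugePages_Surp 1 would print node 1's surplus-hugepage count, which is the figure the nodes_test loop earlier in this trace keeps accumulating.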
00:03:59.429 11:58:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.429 11:58:12 -- setup/common.sh@33 -- # echo 0 00:03:59.429 11:58:12 -- setup/common.sh@33 -- # return 0 00:03:59.429 11:58:12 -- setup/hugepages.sh@97 -- # anon=0 00:03:59.429 11:58:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.429 11:58:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.429 11:58:12 -- setup/common.sh@18 -- # local node= 00:03:59.429 11:58:12 -- setup/common.sh@19 -- # local var val 00:03:59.429 11:58:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.429 11:58:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.429 11:58:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.429 11:58:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.429 11:58:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.429 11:58:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.429 11:58:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109216684 kB' 'MemAvailable: 112413148 kB' 'Buffers: 4132 kB' 'Cached: 10729044 kB' 'SwapCached: 0 kB' 'Active: 7735260 kB' 'Inactive: 3495648 kB' 'Active(anon): 7346596 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 
kB' 'Writeback: 0 kB' 'AnonPages: 500580 kB' 'Mapped: 167296 kB' 'Shmem: 6848864 kB' 'KReclaimable: 272968 kB' 'Slab: 937600 kB' 'SReclaimable: 272968 kB' 'SUnreclaim: 664632 kB' 'KernelStack: 27344 kB' 'PageTables: 8272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8836160 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235080 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.429 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.429 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.694 
11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.694 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.694 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.695 11:58:12 
-- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.695 11:58:12 -- setup/common.sh@33 -- # echo 0 00:03:59.695 11:58:12 -- setup/common.sh@33 -- # return 0 00:03:59.695 11:58:12 -- setup/hugepages.sh@99 -- # surp=0 00:03:59.695 11:58:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:59.695 11:58:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:59.695 11:58:12 -- setup/common.sh@18 -- # local node= 00:03:59.695 11:58:12 -- setup/common.sh@19 -- # local var val 00:03:59.695 11:58:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.695 11:58:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.695 11:58:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.695 11:58:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.695 11:58:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.695 11:58:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109217080 kB' 'MemAvailable: 112413544 kB' 'Buffers: 4132 kB' 'Cached: 10729056 kB' 'SwapCached: 0 kB' 'Active: 7738456 kB' 'Inactive: 3495648 kB' 'Active(anon): 7349792 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504768 kB' 'Mapped: 167220 kB' 'Shmem: 6848876 kB' 'KReclaimable: 272968 kB' 'Slab: 937552 kB' 'SReclaimable: 272968 kB' 'SUnreclaim: 664584 kB' 'KernelStack: 27264 kB' 'PageTables: 8272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8840544 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234988 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.695 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.695 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 
11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ Percpu 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.696 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.696 11:58:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.696 11:58:12 -- setup/common.sh@33 -- # echo 0 00:03:59.696 11:58:12 -- setup/common.sh@33 -- # return 0 
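Editor's note: at this point the trace has derived anon=0 and surp=0 and is finishing the HugePages_Rsvd scan; the summary echoes and checks that follow reduce to simple arithmetic over those values. Below is a minimal sketch of that verification step, assuming the hypothetical get_meminfo_value helper sketched earlier; it is illustrative only, not the actual setup/hugepages.sh logic.

verify_requested_hugepages() {   # hypothetical name, illustrative only
    local requested=${1:-1025}   # odd_alloc requests 1025 pages (HUGEMEM=2049, 2048 kB pages)
    local total surp resv
    total=$(get_meminfo_value HugePages_Total)
    surp=$(get_meminfo_value HugePages_Surp)
    resv=$(get_meminfo_value HugePages_Rsvd)
    echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp"
    # The reported total must account for the requested pages plus any surplus
    # and reserved pages (both expected to be 0 here), i.e. total == requested.
    (( total == requested + surp + resv )) || return 1
    (( total == requested ))
}

Per-node counts are gathered the same way, just pointed at /sys/devices/system/node/nodeN/meminfo, as the node0/node1 walk at the end of the even_2G_alloc test earlier in this trace shows.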
00:03:59.697 11:58:12 -- setup/hugepages.sh@100 -- # resv=0 00:03:59.697 11:58:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:59.697 nr_hugepages=1025 00:03:59.697 11:58:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:59.697 resv_hugepages=0 00:03:59.697 11:58:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:59.697 surplus_hugepages=0 00:03:59.697 11:58:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:59.697 anon_hugepages=0 00:03:59.697 11:58:12 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:59.697 11:58:12 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:59.697 11:58:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:59.697 11:58:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:59.697 11:58:12 -- setup/common.sh@18 -- # local node= 00:03:59.697 11:58:12 -- setup/common.sh@19 -- # local var val 00:03:59.697 11:58:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.697 11:58:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.697 11:58:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.697 11:58:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.697 11:58:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.697 11:58:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109219104 kB' 'MemAvailable: 112415568 kB' 'Buffers: 4132 kB' 'Cached: 10729080 kB' 'SwapCached: 0 kB' 'Active: 7739196 kB' 'Inactive: 3495648 kB' 'Active(anon): 7350532 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505076 kB' 'Mapped: 167624 kB' 'Shmem: 6848900 kB' 'KReclaimable: 272968 kB' 'Slab: 937552 kB' 'SReclaimable: 272968 kB' 'SUnreclaim: 664584 kB' 'KernelStack: 27360 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8840556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235100 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # 
continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.697 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.697 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.698 11:58:12 -- setup/common.sh@33 -- # echo 1025 00:03:59.698 11:58:12 -- setup/common.sh@33 -- # return 0 00:03:59.698 11:58:12 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:59.698 11:58:12 -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.698 11:58:12 -- setup/hugepages.sh@27 -- # local node 00:03:59.698 11:58:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.698 11:58:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:59.698 11:58:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.698 11:58:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:59.698 11:58:12 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.698 11:58:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.698 11:58:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.698 11:58:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.698 11:58:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.698 11:58:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.698 11:58:12 -- setup/common.sh@18 -- # local node=0 00:03:59.698 
11:58:12 -- setup/common.sh@19 -- # local var val 00:03:59.698 11:58:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.698 11:58:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.698 11:58:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.698 11:58:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.698 11:58:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.698 11:58:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60976456 kB' 'MemUsed: 4682552 kB' 'SwapCached: 0 kB' 'Active: 1386728 kB' 'Inactive: 204208 kB' 'Active(anon): 1218672 kB' 'Inactive(anon): 0 kB' 'Active(file): 168056 kB' 'Inactive(file): 204208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1433104 kB' 'Mapped: 96052 kB' 'AnonPages: 160988 kB' 'Shmem: 1060840 kB' 'KernelStack: 15000 kB' 'PageTables: 3976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157960 kB' 'Slab: 482852 kB' 'SReclaimable: 157960 kB' 'SUnreclaim: 324892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.698 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.698 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.699 11:58:12 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.699 11:58:12 -- setup/common.sh@32 -- 
# continue 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.699 11:58:12 -- setup/common.sh@33 -- # echo 0 00:03:59.699 11:58:12 -- setup/common.sh@33 -- # return 0 00:03:59.699 11:58:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.699 11:58:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.699 11:58:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.699 11:58:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:59.699 11:58:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.699 11:58:12 -- setup/common.sh@18 -- # local node=1 00:03:59.699 11:58:12 -- setup/common.sh@19 -- # local var val 00:03:59.699 11:58:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.699 11:58:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.699 11:58:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:59.699 11:58:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:59.699 11:58:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.699 11:58:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.699 11:58:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679844 kB' 'MemFree: 48242120 kB' 'MemUsed: 12437724 kB' 'SwapCached: 0 kB' 'Active: 6346564 kB' 'Inactive: 3291440 kB' 'Active(anon): 6125956 kB' 'Inactive(anon): 0 kB' 'Active(file): 220608 kB' 'Inactive(file): 3291440 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9300116 kB' 'Mapped: 70656 kB' 'AnonPages: 338132 kB' 'Shmem: 5788068 kB' 'KernelStack: 12344 kB' 'PageTables: 4624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115008 kB' 'Slab: 454700 kB' 'SReclaimable: 115008 kB' 'SUnreclaim: 339692 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.699 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.699 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # 
continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # continue 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.700 11:58:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.700 11:58:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.700 11:58:12 -- setup/common.sh@33 -- # echo 0 00:03:59.700 11:58:12 -- setup/common.sh@33 -- # return 0 00:03:59.700 11:58:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.700 11:58:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.700 11:58:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.700 11:58:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.700 11:58:12 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:59.700 node0=512 expecting 513 00:03:59.700 11:58:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.700 11:58:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.700 11:58:12 -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.700 11:58:12 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:59.700 node1=513 expecting 512 00:03:59.700 11:58:12 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:59.700 00:03:59.700 real 0m3.700s 00:03:59.700 user 0m1.511s 00:03:59.700 sys 0m2.255s 00:03:59.700 11:58:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.700 11:58:12 -- common/autotest_common.sh@10 -- # set +x 00:03:59.700 ************************************ 00:03:59.700 END TEST odd_alloc 00:03:59.700 ************************************ 00:03:59.700 11:58:12 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:59.700 11:58:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:59.700 11:58:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:59.700 11:58:12 -- common/autotest_common.sh@10 -- # set +x 00:03:59.700 ************************************ 00:03:59.700 START TEST custom_alloc 00:03:59.700 ************************************ 00:03:59.700 11:58:12 -- common/autotest_common.sh@1104 -- # custom_alloc 00:03:59.700 11:58:12 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:59.701 11:58:12 -- setup/hugepages.sh@169 -- # local node 00:03:59.701 11:58:12 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:59.701 11:58:12 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:59.701 11:58:12 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:59.701 11:58:12 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:59.701 11:58:12 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:59.701 11:58:12 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:59.701 11:58:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.701 11:58:12 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:59.701 11:58:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:59.701 11:58:12 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:59.701 11:58:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.701 11:58:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:59.701 11:58:12 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.701 11:58:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.701 11:58:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.701 11:58:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:59.701 11:58:12 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:59.701 11:58:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.701 11:58:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:59.701 11:58:12 -- setup/hugepages.sh@83 -- # : 256 00:03:59.701 11:58:12 -- setup/hugepages.sh@84 -- # : 1 00:03:59.701 11:58:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.701 11:58:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:59.701 11:58:12 -- setup/hugepages.sh@83 -- # : 0 00:03:59.701 11:58:12 -- setup/hugepages.sh@84 -- # : 0 00:03:59.701 11:58:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.701 11:58:12 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:59.701 11:58:12 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:59.701 11:58:12 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:59.701 11:58:12 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:59.701 11:58:12 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:59.701 11:58:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.701 
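Note on this stretch of the trace: the odd_alloc test that just finished asked for 1025 pages on a 2-node box and accepted the kernel placing the odd page on either node ("node0=512 expecting 513", "node1=513 expecting 512", with the final order-insensitive check [[ 512 513 == \5\1\2\ \5\1\3 ]]). The custom_alloc test starting here first turns size requests into page counts, and the traced values are consistent with size and Hugepagesize both being in kB: 1048576 kB at the 2048 kB page size seen above gives 512 pages, the later 2097152 kB request gives 1024, and with no explicit per-node list the count is spread evenly (256 per node in the first call). A small sketch of that arithmetic, with illustrative helper names rather than the actual setup/hugepages.sh functions:

    # Sketch of the accounting visible in this trace (assumed kB units).
    hugepagesize_kb=2048                       # Hugepagesize from /proc/meminfo above

    kb_to_hugepages() {                        # 1048576 -> 512, 2097152 -> 1024
        echo $(( $1 / hugepagesize_kb ))
    }

    spread_over_nodes() {                      # distribute N pages over M nodes
        local pages=$1 nodes=$2 i out=()
        for (( i = 0; i < nodes; i++ )); do
            out+=( $(( pages / nodes + (i < pages % nodes ? 1 : 0) )) )
        done
        echo "${out[@]}"
    }

    kb_to_hugepages 1048576                    # -> 512
    spread_over_nodes 512 2                    # -> 256 256 (even split, as traced)
    spread_over_nodes 1025 2                   # -> 513 512 (the odd_alloc case; the test
                                               #    above tolerated either node holding 513)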
11:58:12 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:59.701 11:58:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:59.701 11:58:12 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:59.701 11:58:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.701 11:58:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:59.701 11:58:12 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.701 11:58:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.701 11:58:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.701 11:58:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:59.701 11:58:12 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:59.701 11:58:12 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:59.701 11:58:12 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:59.701 11:58:12 -- setup/hugepages.sh@78 -- # return 0 00:03:59.701 11:58:12 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:59.701 11:58:12 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:59.701 11:58:12 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:59.701 11:58:12 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:59.701 11:58:12 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:59.701 11:58:12 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:59.701 11:58:12 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:59.701 11:58:12 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:59.701 11:58:12 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:59.701 11:58:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.701 11:58:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:59.701 11:58:12 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.701 11:58:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.701 11:58:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.701 11:58:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:59.701 11:58:12 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:59.701 11:58:12 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:59.701 11:58:12 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:59.701 11:58:12 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:59.701 11:58:12 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:59.701 11:58:12 -- setup/hugepages.sh@78 -- # return 0 00:03:59.701 11:58:12 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:59.701 11:58:12 -- setup/hugepages.sh@187 -- # setup output 00:03:59.701 11:58:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.701 11:58:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:03.003 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:03.003 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:03.003 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:03.003 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:03.268 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:03.268 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:03.268 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:03.268 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:03.268 0000:00:01.6 (8086 0b00): Already using the 
vfio-pci driver 00:04:03.268 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:03.268 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:03.268 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:03.268 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:03.268 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:03.268 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:03.268 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:03.268 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:03.268 11:58:16 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:03.268 11:58:16 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:03.268 11:58:16 -- setup/hugepages.sh@89 -- # local node 00:04:03.268 11:58:16 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.268 11:58:16 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.268 11:58:16 -- setup/hugepages.sh@92 -- # local surp 00:04:03.268 11:58:16 -- setup/hugepages.sh@93 -- # local resv 00:04:03.268 11:58:16 -- setup/hugepages.sh@94 -- # local anon 00:04:03.268 11:58:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.268 11:58:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.268 11:58:16 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.268 11:58:16 -- setup/common.sh@18 -- # local node= 00:04:03.268 11:58:16 -- setup/common.sh@19 -- # local var val 00:04:03.268 11:58:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.268 11:58:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.268 11:58:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.268 11:58:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.268 11:58:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.268 11:58:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.268 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.268 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.268 11:58:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 108177640 kB' 'MemAvailable: 111374104 kB' 'Buffers: 4132 kB' 'Cached: 10729192 kB' 'SwapCached: 0 kB' 'Active: 7735496 kB' 'Inactive: 3495648 kB' 'Active(anon): 7346832 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500636 kB' 'Mapped: 166816 kB' 'Shmem: 6849012 kB' 'KReclaimable: 272968 kB' 'Slab: 937380 kB' 'SReclaimable: 272968 kB' 'SUnreclaim: 664412 kB' 'KernelStack: 27296 kB' 'PageTables: 8312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8835624 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235160 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:04:03.268 11:58:16 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.268 11:58:16 -- setup/common.sh@32 -- # 
continue 00:04:03.268 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.268 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.268 11:58:16 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.268 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.268 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.268 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.268 11:58:16 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.268 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.268 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.268 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.268 11:58:16 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.268 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.268 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.268 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.268 11:58:16 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.268 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.268 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.268 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.268 11:58:16 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.268 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.268 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.268 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.268 11:58:16 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.268 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.268 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.268 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.268 11:58:16 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.268 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.268 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.268 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 
-- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 
00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 11:58:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.269 11:58:16 -- setup/common.sh@33 -- # echo 0 00:04:03.269 11:58:16 -- setup/common.sh@33 -- # return 0 00:04:03.269 11:58:16 -- setup/hugepages.sh@97 -- # anon=0 00:04:03.269 11:58:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.269 11:58:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.269 11:58:16 -- setup/common.sh@18 -- # local node= 00:04:03.269 11:58:16 -- setup/common.sh@19 -- # local var val 00:04:03.269 11:58:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.269 11:58:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.269 11:58:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.269 11:58:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.269 11:58:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.269 11:58:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 108177704 kB' 'MemAvailable: 111374168 kB' 'Buffers: 4132 kB' 'Cached: 10729192 kB' 'SwapCached: 0 kB' 'Active: 7735200 kB' 'Inactive: 3495648 kB' 'Active(anon): 7346536 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500384 kB' 'Mapped: 166816 kB' 'Shmem: 6849012 kB' 'KReclaimable: 272968 kB' 'Slab: 937372 kB' 'SReclaimable: 272968 kB' 'SUnreclaim: 664404 kB' 'KernelStack: 27248 kB' 'PageTables: 7888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8835636 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235048 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 
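The long field-by-field trace above and below is setup/common.sh's get_meminfo helper walking /proc/meminfo one "Key: value" line at a time, emitting continue for every key that does not match the requested name (here HugePages_Surp) and echoing the value of the one that does. A minimal standalone sketch of that scan pattern, assuming only stock bash and /proc/meminfo (the function name below is invented for illustration):

  #!/usr/bin/env bash
  # Print the value of one /proc/meminfo field, mirroring the loop traced above:
  # split each line on ':' and spaces, keep the key and the first value token.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }
  # e.g. get_meminfo_sketch HugePages_Surp  -> prints 0 on this test node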
00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.271 11:58:16 -- setup/common.sh@33 -- # echo 0 00:04:03.271 11:58:16 -- setup/common.sh@33 -- # return 0 00:04:03.271 11:58:16 -- setup/hugepages.sh@99 -- # surp=0 00:04:03.271 11:58:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.271 11:58:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.271 11:58:16 -- setup/common.sh@18 -- # local node= 00:04:03.271 11:58:16 -- setup/common.sh@19 -- # local var val 00:04:03.271 11:58:16 -- setup/common.sh@20 
-- # local mem_f mem 00:04:03.271 11:58:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.271 11:58:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.271 11:58:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.271 11:58:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.271 11:58:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 108177084 kB' 'MemAvailable: 111373548 kB' 'Buffers: 4132 kB' 'Cached: 10729192 kB' 'SwapCached: 0 kB' 'Active: 7734200 kB' 'Inactive: 3495648 kB' 'Active(anon): 7345536 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499876 kB' 'Mapped: 166700 kB' 'Shmem: 6849012 kB' 'KReclaimable: 272968 kB' 'Slab: 937368 kB' 'SReclaimable: 272968 kB' 'SUnreclaim: 664400 kB' 'KernelStack: 27344 kB' 'PageTables: 7940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8835648 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235048 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 
-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.271 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.271 11:58:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 
00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- 
setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.272 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.272 11:58:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.272 11:58:16 -- setup/common.sh@33 -- # echo 0 00:04:03.272 11:58:16 -- setup/common.sh@33 -- # return 0 00:04:03.272 11:58:16 -- setup/hugepages.sh@100 -- # resv=0 00:04:03.272 11:58:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:03.272 nr_hugepages=1536 00:04:03.272 11:58:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.272 resv_hugepages=0 00:04:03.272 11:58:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.272 surplus_hugepages=0 00:04:03.272 11:58:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.272 anon_hugepages=0 00:04:03.272 11:58:16 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:03.272 11:58:16 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:03.272 11:58:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.273 11:58:16 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.273 11:58:16 -- setup/common.sh@18 -- # local node= 00:04:03.273 11:58:16 -- setup/common.sh@19 -- # local var val 00:04:03.273 11:58:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.273 11:58:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.273 11:58:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.273 11:58:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.273 11:58:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.273 11:58:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.273 11:58:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 108176476 kB' 'MemAvailable: 111372940 
kB' 'Buffers: 4132 kB' 'Cached: 10729192 kB' 'SwapCached: 0 kB' 'Active: 7734500 kB' 'Inactive: 3495648 kB' 'Active(anon): 7345836 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500172 kB' 'Mapped: 166700 kB' 'Shmem: 6849012 kB' 'KReclaimable: 272968 kB' 'Slab: 937368 kB' 'SReclaimable: 272968 kB' 'SUnreclaim: 664400 kB' 'KernelStack: 27312 kB' 'PageTables: 8464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8835664 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235096 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
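The echoes at hugepages.sh@102-@105 above (nr_hugepages=1536, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) feed the consistency guard at @107, and the HugePages_Total scan that continues below repeats the same guard once the kernel's count is read back; with this run's values both reduce to 1536 == 1536 + 0 + 0. A hedged restatement of that check, using the names as echoed in the log:

  # Values echoed by the test above for this run:
  nr_hugepages=1536   # requested hugepage count, echoed as nr_hugepages=1536
  surp=0              # HugePages_Surp from /proc/meminfo
  resv=0              # HugePages_Rsvd from /proc/meminfo
  # HugePages_Total read back is also 1536, so the traced guard reduces to
  # 1536 == 1536 + 0 + 0 and passes:
  (( 1536 == nr_hugepages + surp + resv )) && echo 'hugepage accounting consistent'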
00:04:03.273 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.273 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.273 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- 
# continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.274 11:58:16 -- setup/common.sh@33 -- # echo 1536 00:04:03.274 11:58:16 -- setup/common.sh@33 -- # return 0 00:04:03.274 11:58:16 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:03.274 11:58:16 -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.274 11:58:16 -- setup/hugepages.sh@27 -- # local node 00:04:03.274 11:58:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.274 11:58:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:03.274 11:58:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.274 11:58:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:03.274 11:58:16 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:03.274 11:58:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.274 11:58:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.274 11:58:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.274 11:58:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.274 11:58:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.274 11:58:16 -- setup/common.sh@18 -- # local node=0 00:04:03.274 11:58:16 -- setup/common.sh@19 -- # local var val 00:04:03.274 11:58:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.274 11:58:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.274 11:58:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.274 11:58:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.274 11:58:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.274 11:58:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60973584 kB' 'MemUsed: 4685424 kB' 'SwapCached: 0 kB' 'Active: 1387076 kB' 'Inactive: 204208 kB' 'Active(anon): 1219020 kB' 'Inactive(anon): 0 kB' 'Active(file): 168056 kB' 'Inactive(file): 204208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1433112 kB' 'Mapped: 96064 kB' 'AnonPages: 161348 kB' 'Shmem: 1060848 kB' 'KernelStack: 14920 kB' 'PageTables: 3504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157960 kB' 'Slab: 482640 kB' 'SReclaimable: 157960 kB' 'SUnreclaim: 324680 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.274 11:58:16 -- 
setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.274 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.274 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.275 11:58:16 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.275 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.275 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.537 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 11:58:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.537 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 
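[editor's note] The trace above and below is setup/common.sh's get_meminfo helper walking a per-node meminfo file: it strips the "Node N" prefix, splits each remaining line on ': ', and keeps issuing "continue" until the field name matches the requested key (HugePages_Surp here), at which point it echoes the value. A minimal standalone sketch of the same lookup, assuming a hypothetical helper name get_node_meminfo that is not part of the SPDK scripts:

    # Minimal sketch, not the SPDK helper itself: return one field from a
    # per-node meminfo file by splitting each "Node N Key: value" line on
    # colon/whitespace and matching the requested key, as the trace does.
    get_node_meminfo() {
        local key=$1 node=$2 _ var val
        while IFS=': ' read -r _ _ var val _; do
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }
    # e.g. get_node_meminfo HugePages_Surp 0   -> 0 on this machine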
00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@33 -- # echo 0 00:04:03.538 11:58:16 -- setup/common.sh@33 -- # return 0 00:04:03.538 11:58:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.538 11:58:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.538 11:58:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.538 11:58:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:03.538 11:58:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.538 11:58:16 -- 
setup/common.sh@18 -- # local node=1 00:04:03.538 11:58:16 -- setup/common.sh@19 -- # local var val 00:04:03.538 11:58:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.538 11:58:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.538 11:58:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:03.538 11:58:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:03.538 11:58:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.538 11:58:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679844 kB' 'MemFree: 47202196 kB' 'MemUsed: 13477648 kB' 'SwapCached: 0 kB' 'Active: 6346664 kB' 'Inactive: 3291440 kB' 'Active(anon): 6126056 kB' 'Inactive(anon): 0 kB' 'Active(file): 220608 kB' 'Inactive(file): 3291440 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9300264 kB' 'Mapped: 70636 kB' 'AnonPages: 338044 kB' 'Shmem: 5788216 kB' 'KernelStack: 12264 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115008 kB' 'Slab: 454728 kB' 'SReclaimable: 115008 kB' 'SUnreclaim: 339720 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 
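[editor's note] A few statements below, the HugePages_Surp line finally matches, get_meminfo echoes the value, hugepages.sh folds it into that node's count, and TEST custom_alloc closes by comparing the comma-joined per-node counts against the requested split ("node0=512 expecting 512", "node1=1024 expecting 1024"). A standalone sketch of that closing comparison, with the per-node values assumed from this run's log:

    # Sketch of the final check only; an indexed array keeps the join order
    # deterministic, and 512/1024 are the values reported in this run.
    nodes_test=(512 1024)
    expected="512,1024"
    actual=$(IFS=,; echo "${nodes_test[*]}")
    [[ $actual == "$expected" ]] && echo "per-node hugepage split matches"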
00:04:03.539 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:03.539 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # continue 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 11:58:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 11:58:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 11:58:16 -- setup/common.sh@33 -- # echo 0 00:04:03.539 11:58:16 -- setup/common.sh@33 -- # return 0 00:04:03.539 11:58:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.539 11:58:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.539 11:58:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.539 11:58:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.539 11:58:16 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:03.539 node0=512 expecting 512 00:04:03.539 11:58:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.539 11:58:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.539 11:58:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.539 11:58:16 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:03.539 node1=1024 expecting 1024 00:04:03.539 11:58:16 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:03.539 00:04:03.539 real 0m3.682s 00:04:03.539 user 0m1.498s 00:04:03.539 sys 0m2.246s 00:04:03.539 11:58:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.539 11:58:16 -- common/autotest_common.sh@10 -- # set +x 00:04:03.539 ************************************ 00:04:03.539 END TEST custom_alloc 00:04:03.539 ************************************ 00:04:03.539 11:58:16 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:03.539 11:58:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:03.539 11:58:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:03.539 11:58:16 -- common/autotest_common.sh@10 -- # set +x 00:04:03.539 ************************************ 00:04:03.539 START TEST no_shrink_alloc 00:04:03.539 ************************************ 00:04:03.539 11:58:16 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:04:03.539 11:58:16 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:03.539 11:58:16 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:03.539 11:58:16 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:03.539 11:58:16 -- setup/hugepages.sh@51 -- # shift 00:04:03.539 11:58:16 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:03.539 11:58:16 -- setup/hugepages.sh@52 -- # local node_ids 00:04:03.539 11:58:16 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages 
)) 00:04:03.539 11:58:16 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:03.539 11:58:16 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:03.539 11:58:16 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:03.539 11:58:16 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.539 11:58:16 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:03.539 11:58:16 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:03.539 11:58:16 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.539 11:58:16 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.539 11:58:16 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:03.539 11:58:16 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:03.539 11:58:16 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:03.539 11:58:16 -- setup/hugepages.sh@73 -- # return 0 00:04:03.539 11:58:16 -- setup/hugepages.sh@198 -- # setup output 00:04:03.539 11:58:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.539 11:58:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:06.844 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:06.844 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:06.844 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:06.844 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:06.844 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:06.844 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:06.844 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:06.844 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:06.844 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:06.844 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:06.844 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:06.844 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:06.844 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:06.844 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:06.844 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:06.844 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:06.844 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:06.844 11:58:19 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:06.844 11:58:19 -- setup/hugepages.sh@89 -- # local node 00:04:06.844 11:58:19 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:06.844 11:58:19 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:06.844 11:58:19 -- setup/hugepages.sh@92 -- # local surp 00:04:06.844 11:58:19 -- setup/hugepages.sh@93 -- # local resv 00:04:06.844 11:58:19 -- setup/hugepages.sh@94 -- # local anon 00:04:06.844 11:58:19 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.844 11:58:19 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:06.844 11:58:19 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.844 11:58:19 -- setup/common.sh@18 -- # local node= 00:04:06.844 11:58:19 -- setup/common.sh@19 -- # local var val 00:04:06.844 11:58:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.844 11:58:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.844 11:58:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.844 11:58:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.844 11:58:19 -- setup/common.sh@28 -- # mapfile -t 
mem 00:04:06.844 11:58:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.844 11:58:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109243716 kB' 'MemAvailable: 112440180 kB' 'Buffers: 4132 kB' 'Cached: 10729336 kB' 'SwapCached: 0 kB' 'Active: 7735184 kB' 'Inactive: 3495648 kB' 'Active(anon): 7346520 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500152 kB' 'Mapped: 166800 kB' 'Shmem: 6849156 kB' 'KReclaimable: 272968 kB' 'Slab: 936972 kB' 'SReclaimable: 272968 kB' 'SUnreclaim: 664004 kB' 'KernelStack: 27216 kB' 'PageTables: 8028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8831492 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235016 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.844 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.844 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 
00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 
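[editor's note] The no_shrink_alloc run that starts after custom_alloc repeats the same verification: verify_nr_hugepages checks the transparent-hugepage setting, takes AnonHugePages (0 here), then reads HugePages_Surp and HugePages_Rsvd, and, as in the hugepages.sh@110 check earlier in this log, requires that HugePages_Total equal the requested page count plus surplus plus reserved. A hedged sketch of that accounting against /proc/meminfo, with nr_hugepages assumed to be the 1024 pages this test requests:

    # Sketch of the global accounting only; awk pulls the same fields the
    # trace extracts one line at a time. nr_hugepages=1024 is assumed from
    # the get_test_nr_hugepages call earlier in this log.
    nr_hugepages=1024
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"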
00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.845 11:58:19 -- setup/common.sh@33 -- # echo 0 00:04:06.845 11:58:19 -- setup/common.sh@33 -- # return 0 00:04:06.845 11:58:19 -- setup/hugepages.sh@97 -- # anon=0 00:04:06.845 11:58:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:06.845 11:58:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.845 11:58:19 -- setup/common.sh@18 -- # local node= 00:04:06.845 11:58:19 -- setup/common.sh@19 -- # local var val 00:04:06.845 11:58:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.845 11:58:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.845 11:58:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.845 11:58:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.845 11:58:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.845 11:58:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109243808 kB' 'MemAvailable: 112440272 kB' 'Buffers: 4132 kB' 'Cached: 10729340 kB' 'SwapCached: 0 kB' 'Active: 7734868 kB' 'Inactive: 3495648 kB' 'Active(anon): 7346204 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499864 kB' 'Mapped: 166748 kB' 'Shmem: 6849160 kB' 'KReclaimable: 272968 kB' 'Slab: 936972 kB' 'SReclaimable: 272968 kB' 'SUnreclaim: 664004 kB' 'KernelStack: 27216 kB' 'PageTables: 8024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8831504 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235000 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.845 
11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.845 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.845 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # continue 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.846 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.846 11:58:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.109 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.109 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.109 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.109 11:58:19 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.109 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.109 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.109 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.109 11:58:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.109 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.109 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.109 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.109 11:58:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.109 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.109 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.109 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.109 11:58:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.109 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.109 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.109 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.109 11:58:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.109 11:58:19 -- setup/common.sh@33 -- # echo 0 00:04:07.109 11:58:19 -- setup/common.sh@33 -- # return 0 00:04:07.109 11:58:19 -- setup/hugepages.sh@99 -- # surp=0 00:04:07.109 11:58:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:07.109 11:58:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:07.109 11:58:19 -- setup/common.sh@18 -- # local node= 00:04:07.109 11:58:19 -- setup/common.sh@19 -- # local var val 00:04:07.109 11:58:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.109 11:58:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.109 11:58:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.109 11:58:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.109 11:58:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.109 11:58:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.109 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.109 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.109 11:58:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109243412 kB' 'MemAvailable: 112439876 kB' 'Buffers: 4132 kB' 'Cached: 10729340 kB' 'SwapCached: 0 kB' 'Active: 7734380 kB' 'Inactive: 3495648 kB' 'Active(anon): 7345716 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499848 kB' 'Mapped: 166672 kB' 'Shmem: 6849160 kB' 'KReclaimable: 272968 kB' 'Slab: 936956 kB' 'SReclaimable: 272968 kB' 'SUnreclaim: 663988 kB' 'KernelStack: 27216 kB' 'PageTables: 8012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8831520 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235000 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:04:07.109 11:58:19 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.109 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.109 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.109 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.109 11:58:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.109 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.109 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- 
setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 
11:58:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.110 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.110 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.111 11:58:19 -- setup/common.sh@33 -- # echo 0 00:04:07.111 
11:58:19 -- setup/common.sh@33 -- # return 0 00:04:07.111 11:58:19 -- setup/hugepages.sh@100 -- # resv=0 00:04:07.111 11:58:19 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:07.111 nr_hugepages=1024 00:04:07.111 11:58:19 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:07.111 resv_hugepages=0 00:04:07.111 11:58:19 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:07.111 surplus_hugepages=0 00:04:07.111 11:58:19 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:07.111 anon_hugepages=0 00:04:07.111 11:58:19 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.111 11:58:19 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:07.111 11:58:19 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:07.111 11:58:19 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.111 11:58:19 -- setup/common.sh@18 -- # local node= 00:04:07.111 11:58:19 -- setup/common.sh@19 -- # local var val 00:04:07.111 11:58:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.111 11:58:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.111 11:58:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.111 11:58:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.111 11:58:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.111 11:58:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109243688 kB' 'MemAvailable: 112440152 kB' 'Buffers: 4132 kB' 'Cached: 10729376 kB' 'SwapCached: 0 kB' 'Active: 7734056 kB' 'Inactive: 3495648 kB' 'Active(anon): 7345392 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499448 kB' 'Mapped: 166672 kB' 'Shmem: 6849196 kB' 'KReclaimable: 272968 kB' 'Slab: 936956 kB' 'SReclaimable: 272968 kB' 'SUnreclaim: 663988 kB' 'KernelStack: 27200 kB' 'PageTables: 7960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8831532 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235000 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
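The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" above are get_meminfo scanning every /proc/meminfo key until it reaches the one it was asked for (HugePages_Surp, then HugePages_Rsvd), which is why each lookup produces dozens of near-identical trace lines before the closing "echo 0". Those values feed the guards that follow: surp=0, resv=0, and the check that the kernel-reported total equals nr_hugepages + surp + resv (1024 here). Condensed, the traced parsing amounts to roughly the sketch below; it paraphrases the logged setup/common.sh logic (which uses mapfile plus a per-field loop) rather than reproducing the script verbatim:

    # Sketch only - paraphrased from the trace, not the verbatim setup/common.sh.
    get_meminfo() {
        local get=$1 node=$2 line var val _
        local mem_f=/proc/meminfo
        # With a node argument (e.g. "get_meminfo HugePages_Surp 0"), read that node's own meminfo.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#"Node $node "}             # per-node files prefix every line with "Node <n> "
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                        # e.g. 0 for HugePages_Rsvd, 1024 for HugePages_Total
                return 0
            fi
        done < "$mem_f"
        return 1                                   # requested field not present
    }
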
00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.111 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.111 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 
00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 
11:58:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.112 11:58:19 -- setup/common.sh@33 -- # echo 1024 00:04:07.112 11:58:19 -- setup/common.sh@33 -- # return 0 00:04:07.112 11:58:19 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.112 11:58:19 -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.112 11:58:19 -- setup/hugepages.sh@27 -- # local node 00:04:07.112 11:58:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.112 11:58:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:07.112 11:58:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.112 11:58:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:07.112 11:58:19 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:07.112 11:58:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.112 11:58:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.112 11:58:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.112 11:58:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.112 11:58:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.112 11:58:19 
-- setup/common.sh@18 -- # local node=0 00:04:07.112 11:58:19 -- setup/common.sh@19 -- # local var val 00:04:07.112 11:58:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.112 11:58:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.112 11:58:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.112 11:58:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.112 11:58:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.112 11:58:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59911924 kB' 'MemUsed: 5747084 kB' 'SwapCached: 0 kB' 'Active: 1386300 kB' 'Inactive: 204208 kB' 'Active(anon): 1218244 kB' 'Inactive(anon): 0 kB' 'Active(file): 168056 kB' 'Inactive(file): 204208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1433112 kB' 'Mapped: 96076 kB' 'AnonPages: 160476 kB' 'Shmem: 1060848 kB' 'KernelStack: 14792 kB' 'PageTables: 3252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157960 kB' 'Slab: 482320 kB' 'SReclaimable: 157960 kB' 'SUnreclaim: 324360 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.112 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.112 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 
00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # continue 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.113 11:58:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.113 11:58:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.113 11:58:19 -- setup/common.sh@33 -- # echo 0 00:04:07.113 11:58:19 -- setup/common.sh@33 -- # return 0 00:04:07.113 11:58:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.113 11:58:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.113 11:58:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.113 11:58:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.113 11:58:19 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:07.113 node0=1024 expecting 1024 00:04:07.113 11:58:19 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:07.113 11:58:19 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:07.113 11:58:19 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:07.113 11:58:19 -- setup/hugepages.sh@202 -- # setup output 00:04:07.113 11:58:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.113 11:58:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:10.505 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:10.505 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:10.505 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:10.505 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:10.505 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:10.505 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:10.505 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:10.505 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:10.505 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:10.505 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:10.505 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:10.505 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:10.505 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:10.505 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:10.505 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:10.505 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:10.505 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:10.505 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:10.505 11:58:23 -- setup/hugepages.sh@204 -- # 
verify_nr_hugepages 00:04:10.505 11:58:23 -- setup/hugepages.sh@89 -- # local node 00:04:10.505 11:58:23 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:10.505 11:58:23 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:10.505 11:58:23 -- setup/hugepages.sh@92 -- # local surp 00:04:10.505 11:58:23 -- setup/hugepages.sh@93 -- # local resv 00:04:10.505 11:58:23 -- setup/hugepages.sh@94 -- # local anon 00:04:10.505 11:58:23 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:10.505 11:58:23 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:10.505 11:58:23 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:10.505 11:58:23 -- setup/common.sh@18 -- # local node= 00:04:10.505 11:58:23 -- setup/common.sh@19 -- # local var val 00:04:10.505 11:58:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.505 11:58:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.505 11:58:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.505 11:58:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.505 11:58:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.505 11:58:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.505 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.505 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.505 11:58:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109271580 kB' 'MemAvailable: 112467972 kB' 'Buffers: 4132 kB' 'Cached: 10729468 kB' 'SwapCached: 0 kB' 'Active: 7736048 kB' 'Inactive: 3495648 kB' 'Active(anon): 7347384 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500860 kB' 'Mapped: 166820 kB' 'Shmem: 6849288 kB' 'KReclaimable: 272824 kB' 'Slab: 937172 kB' 'SReclaimable: 272824 kB' 'SUnreclaim: 664348 kB' 'KernelStack: 27120 kB' 'PageTables: 7968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8832268 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234936 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:04:10.505 11:58:23 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.505 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.505 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.505 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.505 11:58:23 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.505 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.505 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.505 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.505 11:58:23 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.505 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.505 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.505 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.505 11:58:23 -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.505 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.505 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.505 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.505 11:58:23 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.505 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.505 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.505 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.505 11:58:23 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.505 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.506 11:58:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.506 11:58:23 -- setup/common.sh@33 -- # echo 0 00:04:10.506 11:58:23 -- setup/common.sh@33 -- # return 0 00:04:10.506 11:58:23 -- setup/hugepages.sh@97 -- # anon=0 00:04:10.506 11:58:23 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:10.506 
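After the machine-wide totals check out, the same accounting repeats per NUMA node: get_nodes enumerates /sys/devices/system/node/node[0-9]* (two nodes on this box), the expected count for each node is adjusted by the reserved and surplus pages read from that node's own meminfo, and the result is compared with what the node reports, which is where the "node0=1024 expecting 1024" line above comes from. The setup.sh call that follows runs with CLEAR_HUGE=no and NRHUGE=512, so it leaves the existing allocation alone and only prints the "Requested 512 hugepages but 1024 already allocated on node0" notice; the second verify_nr_hugepages pass then checks that transparent hugepages are not disabled (the mode string "always [madvise] never" shows madvise selected) before reading AnonHugePages, which is 0, so anon=0. A condensed illustration of the per-node step follows, with this run's numbers hard-coded and relying on a get_meminfo helper like the sketch earlier (again a paraphrase, not the verbatim setup/hugepages.sh):

    # Illustration only - node layout hard-coded to match this run (all 1024 pages on node 0).
    declare -a nodes_sys=([0]=1024 [1]=0)    # what the system reports per node
    declare -a nodes_test=([0]=1024)         # what the test expects per node
    resv=0                                   # HugePages_Rsvd, 0 in this run
    for n in "${!nodes_test[@]}"; do
        surp=$(get_meminfo HugePages_Surp "$n")                    # per-node lookup; 0 here
        (( nodes_test[n] += resv + surp ))
        echo "node$n=${nodes_test[n]} expecting ${nodes_sys[n]}"   # -> node0=1024 expecting 1024
    done
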
11:58:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.506 11:58:23 -- setup/common.sh@18 -- # local node= 00:04:10.506 11:58:23 -- setup/common.sh@19 -- # local var val 00:04:10.506 11:58:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.506 11:58:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.506 11:58:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.506 11:58:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.506 11:58:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.506 11:58:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.506 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109272220 kB' 'MemAvailable: 112468612 kB' 'Buffers: 4132 kB' 'Cached: 10729472 kB' 'SwapCached: 0 kB' 'Active: 7735200 kB' 'Inactive: 3495648 kB' 'Active(anon): 7346536 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500544 kB' 'Mapped: 166692 kB' 'Shmem: 6849292 kB' 'KReclaimable: 272824 kB' 'Slab: 937196 kB' 'SReclaimable: 272824 kB' 'SUnreclaim: 664372 kB' 'KernelStack: 27120 kB' 'PageTables: 7956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8832692 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234904 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # 
continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.507 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.507 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.508 11:58:23 -- setup/common.sh@33 -- # echo 0 00:04:10.508 11:58:23 -- setup/common.sh@33 -- # return 0 00:04:10.508 11:58:23 -- setup/hugepages.sh@99 -- # surp=0 00:04:10.508 11:58:23 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:10.508 11:58:23 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:10.508 11:58:23 -- setup/common.sh@18 -- # local node= 00:04:10.508 11:58:23 -- setup/common.sh@19 -- # local var val 00:04:10.508 11:58:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.508 11:58:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.508 11:58:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.508 11:58:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.508 11:58:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.508 11:58:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.508 11:58:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109268440 kB' 'MemAvailable: 112464832 kB' 'Buffers: 4132 kB' 'Cached: 10729480 kB' 'SwapCached: 0 kB' 'Active: 7738080 kB' 'Inactive: 3495648 kB' 'Active(anon): 7349416 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 
'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503384 kB' 'Mapped: 167196 kB' 'Shmem: 6849300 kB' 'KReclaimable: 272824 kB' 'Slab: 937196 kB' 'SReclaimable: 272824 kB' 'SUnreclaim: 664372 kB' 'KernelStack: 27104 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8836160 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234872 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 
-- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.508 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.508 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 
11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.509 11:58:23 -- setup/common.sh@33 -- # echo 0 00:04:10.509 11:58:23 -- setup/common.sh@33 -- # return 0 00:04:10.509 11:58:23 -- setup/hugepages.sh@100 -- # resv=0 00:04:10.509 11:58:23 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:10.509 nr_hugepages=1024 00:04:10.509 11:58:23 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:10.509 resv_hugepages=0 00:04:10.509 11:58:23 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:10.509 surplus_hugepages=0 00:04:10.509 11:58:23 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:10.509 anon_hugepages=0 00:04:10.509 11:58:23 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.509 11:58:23 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:10.509 11:58:23 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:10.509 11:58:23 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:10.509 11:58:23 -- setup/common.sh@18 -- # local node= 00:04:10.509 11:58:23 -- setup/common.sh@19 -- # local var val 00:04:10.509 11:58:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.509 11:58:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.509 11:58:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.509 11:58:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.509 11:58:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.509 11:58:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.509 11:58:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109264408 kB' 'MemAvailable: 112460800 kB' 'Buffers: 4132 kB' 'Cached: 10729492 kB' 'SwapCached: 0 kB' 'Active: 7740708 kB' 'Inactive: 3495648 kB' 'Active(anon): 7352044 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506060 kB' 'Mapped: 167544 kB' 'Shmem: 6849312 kB' 'KReclaimable: 272824 kB' 'Slab: 937196 kB' 'SReclaimable: 272824 kB' 'SUnreclaim: 664372 kB' 'KernelStack: 27104 kB' 'PageTables: 7928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 
'Committed_AS: 8838428 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234876 kB' 'VmallocChunk: 0 kB' 'Percpu: 97344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2729332 kB' 'DirectMap2M: 21067776 kB' 'DirectMap1G: 112197632 kB' 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.509 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.509 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 
-- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.510 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.510 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.511 11:58:23 -- setup/common.sh@33 -- # echo 1024 00:04:10.511 11:58:23 -- setup/common.sh@33 -- # return 0 00:04:10.511 11:58:23 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.511 11:58:23 -- setup/hugepages.sh@112 -- # get_nodes 00:04:10.511 11:58:23 -- setup/hugepages.sh@27 -- # local node 00:04:10.511 11:58:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.511 11:58:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:10.511 11:58:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.511 11:58:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:10.511 11:58:23 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:10.511 11:58:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:10.511 11:58:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.511 11:58:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.511 11:58:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:10.511 11:58:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.511 11:58:23 -- setup/common.sh@18 -- # local node=0 00:04:10.511 11:58:23 -- setup/common.sh@19 -- # local var val 00:04:10.511 11:58:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.511 11:58:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.511 11:58:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.511 11:58:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.511 11:58:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.511 11:58:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59915988 kB' 'MemUsed: 5743020 kB' 'SwapCached: 0 kB' 'Active: 1385944 kB' 'Inactive: 204208 kB' 'Active(anon): 1217888 kB' 'Inactive(anon): 0 kB' 'Active(file): 168056 kB' 'Inactive(file): 204208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1433112 kB' 'Mapped: 96508 kB' 'AnonPages: 160120 kB' 'Shmem: 1060848 kB' 'KernelStack: 14728 kB' 'PageTables: 3204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157816 kB' 'Slab: 482272 kB' 'SReclaimable: 157816 kB' 'SUnreclaim: 324456 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.511 
11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.511 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.511 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.512 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.512 11:58:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.512 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.512 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.512 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.512 11:58:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.512 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.512 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.512 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.512 11:58:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.512 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.512 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.512 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.512 11:58:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.512 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.512 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.512 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.512 11:58:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.512 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.512 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.512 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.512 11:58:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.512 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.512 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.512 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.512 11:58:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.512 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.512 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.512 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.512 11:58:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.512 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.512 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.512 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.512 11:58:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.512 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.512 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.512 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.512 11:58:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.512 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.512 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.512 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.512 11:58:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.512 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.512 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.512 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.512 
11:58:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.512 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.772 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.772 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.772 11:58:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.772 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.772 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.772 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.772 11:58:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.772 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.772 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.772 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.772 11:58:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.772 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.772 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.772 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.772 11:58:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.772 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.772 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.772 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.772 11:58:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.772 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.772 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.772 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.772 11:58:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.772 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.772 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.772 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.772 11:58:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.772 11:58:23 -- setup/common.sh@32 -- # continue 00:04:10.772 11:58:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.772 11:58:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.772 11:58:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.772 11:58:23 -- setup/common.sh@33 -- # echo 0 00:04:10.772 11:58:23 -- setup/common.sh@33 -- # return 0 00:04:10.772 11:58:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.772 11:58:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.772 11:58:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.772 11:58:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.772 11:58:23 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:10.772 node0=1024 expecting 1024 00:04:10.772 11:58:23 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:10.772 00:04:10.772 real 0m7.159s 00:04:10.772 user 0m2.862s 00:04:10.772 sys 0m4.423s 00:04:10.772 11:58:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.772 11:58:23 -- common/autotest_common.sh@10 -- # set +x 00:04:10.772 ************************************ 00:04:10.772 END TEST no_shrink_alloc 00:04:10.772 ************************************ 00:04:10.772 11:58:23 -- setup/hugepages.sh@217 -- # clear_hp 00:04:10.772 11:58:23 -- setup/hugepages.sh@37 -- # local node hp 00:04:10.772 11:58:23 -- setup/hugepages.sh@39 
-- # for node in "${!nodes_sys[@]}" 00:04:10.772 11:58:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:10.772 11:58:23 -- setup/hugepages.sh@41 -- # echo 0 00:04:10.772 11:58:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:10.772 11:58:23 -- setup/hugepages.sh@41 -- # echo 0 00:04:10.772 11:58:23 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:10.772 11:58:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:10.772 11:58:23 -- setup/hugepages.sh@41 -- # echo 0 00:04:10.772 11:58:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:10.772 11:58:23 -- setup/hugepages.sh@41 -- # echo 0 00:04:10.772 11:58:23 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:10.772 11:58:23 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:10.772 00:04:10.772 real 0m26.109s 00:04:10.772 user 0m10.461s 00:04:10.772 sys 0m16.069s 00:04:10.772 11:58:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.772 11:58:23 -- common/autotest_common.sh@10 -- # set +x 00:04:10.772 ************************************ 00:04:10.772 END TEST hugepages 00:04:10.772 ************************************ 00:04:10.772 11:58:23 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:10.772 11:58:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:10.772 11:58:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:10.772 11:58:23 -- common/autotest_common.sh@10 -- # set +x 00:04:10.772 ************************************ 00:04:10.772 START TEST driver 00:04:10.772 ************************************ 00:04:10.772 11:58:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:10.772 * Looking for test storage... 
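The clear_hp trace just above zeroes every per-node hugepage pool through sysfs before the next test group starts. A minimal sketch of that pattern, assuming the same sysfs layout the log walks (the target file nr_hugepages is the usual one; the trace only shows the echo 0, so treat the exact filename as an assumption rather than the verbatim setup/hugepages.sh code):

    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            # return every reserved page of this size to the kernel
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes   # tell later setup.sh invocations the pools were reset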
00:04:10.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:10.772 11:58:23 -- setup/driver.sh@68 -- # setup reset 00:04:10.772 11:58:23 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.772 11:58:23 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:16.090 11:58:28 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:16.090 11:58:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:16.090 11:58:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:16.090 11:58:28 -- common/autotest_common.sh@10 -- # set +x 00:04:16.090 ************************************ 00:04:16.090 START TEST guess_driver 00:04:16.090 ************************************ 00:04:16.090 11:58:28 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:16.090 11:58:28 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:16.090 11:58:28 -- setup/driver.sh@47 -- # local fail=0 00:04:16.090 11:58:28 -- setup/driver.sh@49 -- # pick_driver 00:04:16.090 11:58:28 -- setup/driver.sh@36 -- # vfio 00:04:16.090 11:58:28 -- setup/driver.sh@21 -- # local iommu_grups 00:04:16.090 11:58:28 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:16.090 11:58:28 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:16.090 11:58:28 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:16.090 11:58:28 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:16.090 11:58:28 -- setup/driver.sh@29 -- # (( 322 > 0 )) 00:04:16.090 11:58:28 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:16.090 11:58:28 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:16.090 11:58:28 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:16.090 11:58:28 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:16.090 11:58:28 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:16.090 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:16.090 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:16.090 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:16.090 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:16.090 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:16.090 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:16.090 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:16.090 11:58:28 -- setup/driver.sh@30 -- # return 0 00:04:16.090 11:58:28 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:16.090 11:58:28 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:16.090 11:58:28 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:16.090 11:58:28 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:16.090 Looking for driver=vfio-pci 00:04:16.090 11:58:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.090 11:58:28 -- setup/driver.sh@45 -- # setup output config 00:04:16.090 11:58:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.090 11:58:28 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:19.387 11:58:31 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.387 11:58:31 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:04:19.387 11:58:31 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.387 11:58:31 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.387 11:58:31 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.387 11:58:31 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.387 11:58:31 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.387 11:58:31 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.387 11:58:31 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.387 11:58:31 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.387 11:58:31 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.387 11:58:31 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.387 11:58:31 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.387 11:58:31 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.387 11:58:31 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.387 11:58:31 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.387 11:58:31 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.387 11:58:31 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.387 11:58:31 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.387 11:58:31 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.387 11:58:31 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.387 11:58:31 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.387 11:58:31 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.387 11:58:31 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.387 11:58:31 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.387 11:58:31 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.387 11:58:31 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.387 11:58:31 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.387 11:58:31 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.387 11:58:31 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.387 11:58:31 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.387 11:58:31 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.387 11:58:31 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.387 11:58:31 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.387 11:58:31 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.387 11:58:31 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.387 11:58:31 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.387 11:58:31 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.387 11:58:31 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.387 11:58:31 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.387 11:58:31 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.387 11:58:31 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.387 11:58:32 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.387 11:58:32 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.387 11:58:32 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.387 11:58:32 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.387 11:58:32 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.387 11:58:32 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.387 11:58:32 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:04:19.387 11:58:32 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.387 11:58:32 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.387 11:58:32 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:19.387 11:58:32 -- setup/driver.sh@65 -- # setup reset 00:04:19.387 11:58:32 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:19.387 11:58:32 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:24.672 00:04:24.672 real 0m8.242s 00:04:24.672 user 0m2.610s 00:04:24.672 sys 0m4.821s 00:04:24.672 11:58:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.672 11:58:36 -- common/autotest_common.sh@10 -- # set +x 00:04:24.672 ************************************ 00:04:24.672 END TEST guess_driver 00:04:24.672 ************************************ 00:04:24.672 00:04:24.672 real 0m13.096s 00:04:24.672 user 0m4.082s 00:04:24.672 sys 0m7.484s 00:04:24.672 11:58:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.672 11:58:36 -- common/autotest_common.sh@10 -- # set +x 00:04:24.672 ************************************ 00:04:24.672 END TEST driver 00:04:24.672 ************************************ 00:04:24.672 11:58:36 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:24.672 11:58:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:24.672 11:58:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:24.672 11:58:36 -- common/autotest_common.sh@10 -- # set +x 00:04:24.672 ************************************ 00:04:24.672 START TEST devices 00:04:24.672 ************************************ 00:04:24.672 11:58:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:24.672 * Looking for test storage... 
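The guess_driver run above settles on vfio-pci once it sees populated IOMMU groups (322 in this run) and confirms that the module dependency chain resolves via modprobe. A rough sketch of that decision, under the assumption that the checks mirror what the trace shows rather than being the verbatim setup/driver.sh:

    unsafe_vfio=N
    [ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ] &&
        unsafe_vfio=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    iommu_groups=(/sys/kernel/iommu_groups/*)

    # vfio-pci is usable when the IOMMU is populated (or unsafe no-IOMMU mode
    # is enabled) and modprobe can resolve the whole vfio_pci module chain
    if { [ "${#iommu_groups[@]}" -gt 0 ] || [ "$unsafe_vfio" = Y ]; } &&
        modprobe --show-depends vfio_pci | grep -q '\.ko'; then
        driver=vfio-pci
    else
        driver='No valid driver found'
    fi
    echo "Looking for driver=$driver"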
00:04:24.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:24.672 11:58:36 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:24.672 11:58:36 -- setup/devices.sh@192 -- # setup reset 00:04:24.672 11:58:36 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.672 11:58:36 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:27.973 11:58:40 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:27.973 11:58:40 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:27.973 11:58:40 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:27.973 11:58:40 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:27.974 11:58:40 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:27.974 11:58:40 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:27.974 11:58:40 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:27.974 11:58:40 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:27.974 11:58:40 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:27.974 11:58:40 -- setup/devices.sh@196 -- # blocks=() 00:04:27.974 11:58:40 -- setup/devices.sh@196 -- # declare -a blocks 00:04:27.974 11:58:40 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:27.974 11:58:40 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:27.974 11:58:40 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:27.974 11:58:40 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:27.974 11:58:40 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:27.974 11:58:40 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:27.974 11:58:40 -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:27.974 11:58:40 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:27.974 11:58:40 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:27.974 11:58:40 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:27.974 11:58:40 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:27.974 No valid GPT data, bailing 00:04:27.974 11:58:40 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:27.974 11:58:40 -- scripts/common.sh@393 -- # pt= 00:04:27.974 11:58:40 -- scripts/common.sh@394 -- # return 1 00:04:27.974 11:58:40 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:27.974 11:58:40 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:27.974 11:58:40 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:27.974 11:58:40 -- setup/common.sh@80 -- # echo 1920383410176 00:04:27.974 11:58:40 -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:27.974 11:58:40 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:27.974 11:58:40 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:27.974 11:58:40 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:27.974 11:58:40 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:27.974 11:58:40 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:27.974 11:58:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:27.974 11:58:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:27.974 11:58:40 -- common/autotest_common.sh@10 -- # set +x 00:04:27.974 ************************************ 00:04:27.974 START TEST nvme_mount 00:04:27.974 ************************************ 00:04:27.974 11:58:40 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:04:27.974 11:58:40 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:27.974 11:58:40 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:27.974 11:58:40 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.974 11:58:40 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:27.974 11:58:40 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:27.974 11:58:40 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:27.974 11:58:40 -- setup/common.sh@40 -- # local part_no=1 00:04:27.974 11:58:40 -- setup/common.sh@41 -- # local size=1073741824 00:04:27.974 11:58:40 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:27.974 11:58:40 -- setup/common.sh@44 -- # parts=() 00:04:27.974 11:58:40 -- setup/common.sh@44 -- # local parts 00:04:27.974 11:58:40 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:27.974 11:58:40 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.974 11:58:40 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:27.974 11:58:40 -- setup/common.sh@46 -- # (( part++ )) 00:04:27.974 11:58:40 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.974 11:58:40 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:27.974 11:58:40 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:27.974 11:58:40 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:28.915 Creating new GPT entries in memory. 00:04:28.915 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:28.915 other utilities. 00:04:28.915 11:58:41 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:28.915 11:58:41 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:28.915 11:58:41 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:28.915 11:58:41 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:28.915 11:58:41 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:29.855 Creating new GPT entries in memory. 00:04:29.855 The operation has completed successfully. 
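The partition step above zaps the old GPT and creates a single 1 GiB partition (sectors 2048..2099199) while sync_dev_uevents.sh waits for the kernel to publish the new block device before mkfs runs. A condensed sketch of the same sequence, with sector numbers copied from the trace and the mount point name purely illustrative:

    disk=/dev/nvme0n1
    mount_point=/path/to/nvme_mount        # illustrative stand-in for the test's nvme_mount dir

    sgdisk "$disk" --zap-all               # destroy the existing GPT/MBR structures
    # hold a lock on the disk node while editing the table, then add partition 1:
    # sectors 2048..2099199 = 2097152 x 512-byte sectors = 1 GiB
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199
    udevadm settle                         # generic stand-in for scripts/sync_dev_uevents.sh
    mkfs.ext4 -qF "${disk}p1"
    mkdir -p "$mount_point" && mount "${disk}p1" "$mount_point"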
00:04:29.855 11:58:42 -- setup/common.sh@57 -- # (( part++ )) 00:04:29.855 11:58:42 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:29.855 11:58:42 -- setup/common.sh@62 -- # wait 1243157 00:04:29.855 11:58:42 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.855 11:58:42 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:29.855 11:58:42 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.855 11:58:42 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:29.855 11:58:42 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:29.855 11:58:42 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.116 11:58:42 -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.116 11:58:42 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:30.116 11:58:42 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:30.116 11:58:42 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.116 11:58:42 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.116 11:58:42 -- setup/devices.sh@53 -- # local found=0 00:04:30.116 11:58:42 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:30.116 11:58:42 -- setup/devices.sh@56 -- # : 00:04:30.116 11:58:42 -- setup/devices.sh@59 -- # local pci status 00:04:30.116 11:58:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.116 11:58:42 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:30.116 11:58:42 -- setup/devices.sh@47 -- # setup output config 00:04:30.116 11:58:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.116 11:58:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:33.422 11:58:46 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.422 11:58:46 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:33.422 11:58:46 -- setup/devices.sh@63 -- # found=1 00:04:33.422 11:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.422 11:58:46 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.422 11:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.422 11:58:46 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.422 11:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.422 11:58:46 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.422 11:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.422 11:58:46 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.422 11:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.422 11:58:46 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.422 
11:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.422 11:58:46 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.422 11:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.422 11:58:46 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.422 11:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.422 11:58:46 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.422 11:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.422 11:58:46 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.422 11:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.422 11:58:46 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.422 11:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.422 11:58:46 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.422 11:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.422 11:58:46 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.422 11:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.422 11:58:46 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.422 11:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.422 11:58:46 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.422 11:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.422 11:58:46 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.422 11:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.422 11:58:46 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.422 11:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.422 11:58:46 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:33.422 11:58:46 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:33.422 11:58:46 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.422 11:58:46 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:33.422 11:58:46 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:33.422 11:58:46 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:33.422 11:58:46 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.422 11:58:46 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.422 11:58:46 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:33.422 11:58:46 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:33.422 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:33.422 11:58:46 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:33.422 11:58:46 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:33.683 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:33.683 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:33.683 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:33.683 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:33.683 11:58:46 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:33.683 11:58:46 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:33.683 11:58:46 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.683 11:58:46 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:33.683 11:58:46 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:33.683 11:58:46 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.683 11:58:46 -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:33.683 11:58:46 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:33.683 11:58:46 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:33.683 11:58:46 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.683 11:58:46 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:33.683 11:58:46 -- setup/devices.sh@53 -- # local found=0 00:04:33.683 11:58:46 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:33.683 11:58:46 -- setup/devices.sh@56 -- # : 00:04:33.683 11:58:46 -- setup/devices.sh@59 -- # local pci status 00:04:33.683 11:58:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.683 11:58:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:33.683 11:58:46 -- setup/devices.sh@47 -- # setup output config 00:04:33.683 11:58:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.683 11:58:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:36.989 11:58:49 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.989 11:58:49 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:36.989 11:58:49 -- setup/devices.sh@63 -- # found=1 00:04:36.989 11:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.989 11:58:49 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.989 11:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.989 11:58:49 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.989 11:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.989 11:58:49 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.989 11:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.989 11:58:49 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.989 11:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.989 11:58:49 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.989 11:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.989 11:58:49 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.989 11:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.989 11:58:49 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.989 11:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.989 11:58:49 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.989 11:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.989 11:58:49 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.989 11:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.989 11:58:49 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.989 11:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.989 11:58:49 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.989 11:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.989 11:58:49 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.989 11:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.989 11:58:49 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.989 11:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.989 11:58:49 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.989 11:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.989 11:58:49 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.989 11:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.989 11:58:49 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.989 11:58:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.251 11:58:50 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:37.251 11:58:50 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:37.251 11:58:50 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.251 11:58:50 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:37.251 11:58:50 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:37.251 11:58:50 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.251 11:58:50 -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:37.251 11:58:50 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:37.251 11:58:50 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:37.251 11:58:50 -- setup/devices.sh@50 -- # local mount_point= 00:04:37.251 11:58:50 -- setup/devices.sh@51 -- # local test_file= 00:04:37.251 11:58:50 -- setup/devices.sh@53 -- # local found=0 00:04:37.251 11:58:50 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:37.251 11:58:50 -- setup/devices.sh@59 -- # local pci status 00:04:37.251 11:58:50 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:37.251 11:58:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.251 11:58:50 -- setup/devices.sh@47 -- # setup output config 00:04:37.251 11:58:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.251 11:58:50 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:40.555 11:58:53 -- 
setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.555 11:58:53 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:40.555 11:58:53 -- setup/devices.sh@63 -- # found=1 00:04:40.555 11:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.555 11:58:53 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.555 11:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.555 11:58:53 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.555 11:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.555 11:58:53 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.555 11:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.555 11:58:53 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.555 11:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.555 11:58:53 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.555 11:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.555 11:58:53 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.555 11:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.555 11:58:53 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.555 11:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.555 11:58:53 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.555 11:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.555 11:58:53 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.555 11:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.555 11:58:53 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.555 11:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.555 11:58:53 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.555 11:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.555 11:58:53 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.555 11:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.555 11:58:53 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.555 11:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.555 11:58:53 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.555 11:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.555 11:58:53 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.555 11:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.555 11:58:53 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.555 11:58:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.817 11:58:53 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:40.817 11:58:53 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:40.817 11:58:53 -- setup/devices.sh@68 -- # return 0 00:04:40.817 11:58:53 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:40.817 11:58:53 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.817 11:58:53 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:04:40.817 11:58:53 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:40.817 11:58:53 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:40.817 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:40.817 00:04:40.817 real 0m12.927s 00:04:40.817 user 0m4.048s 00:04:40.817 sys 0m6.794s 00:04:40.817 11:58:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.817 11:58:53 -- common/autotest_common.sh@10 -- # set +x 00:04:40.817 ************************************ 00:04:40.817 END TEST nvme_mount 00:04:40.817 ************************************ 00:04:40.817 11:58:53 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:40.817 11:58:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:40.817 11:58:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:40.817 11:58:53 -- common/autotest_common.sh@10 -- # set +x 00:04:40.817 ************************************ 00:04:40.817 START TEST dm_mount 00:04:40.817 ************************************ 00:04:40.817 11:58:53 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:40.817 11:58:53 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:40.817 11:58:53 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:40.817 11:58:53 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:40.817 11:58:53 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:40.817 11:58:53 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:40.817 11:58:53 -- setup/common.sh@40 -- # local part_no=2 00:04:40.817 11:58:53 -- setup/common.sh@41 -- # local size=1073741824 00:04:40.817 11:58:53 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:40.817 11:58:53 -- setup/common.sh@44 -- # parts=() 00:04:40.817 11:58:53 -- setup/common.sh@44 -- # local parts 00:04:40.817 11:58:53 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:40.817 11:58:53 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.817 11:58:53 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:40.817 11:58:53 -- setup/common.sh@46 -- # (( part++ )) 00:04:40.817 11:58:53 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.817 11:58:53 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:40.817 11:58:53 -- setup/common.sh@46 -- # (( part++ )) 00:04:40.817 11:58:53 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.817 11:58:53 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:40.817 11:58:53 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:40.817 11:58:53 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:41.760 Creating new GPT entries in memory. 00:04:41.760 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:41.760 other utilities. 00:04:41.760 11:58:54 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:41.760 11:58:54 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.760 11:58:54 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:41.760 11:58:54 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:41.760 11:58:54 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:43.144 Creating new GPT entries in memory. 00:04:43.144 The operation has completed successfully. 
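At this point the dm_mount test has carved two 1 GiB partitions and is about to build a device-mapper target named nvme_dm_test on top of them (the readlink that follows resolves it to dm-1). The dmsetup table itself is not shown in the log, so the linear concatenation below is only a plausible illustration of the step, not the devices.sh table:

    # sizes in 512-byte sectors; both partitions are 2097152 sectors (1 GiB) here
    p1_sectors=$(blockdev --getsz /dev/nvme0n1p1)
    p2_sectors=$(blockdev --getsz /dev/nvme0n1p2)

    # linear table: map p1 first, then append p2 directly after it
    dmsetup create nvme_dm_test <<EOF
    0 $p1_sectors linear /dev/nvme0n1p1 0
    $p1_sectors $p2_sectors linear /dev/nvme0n1p2 0
    EOF

    readlink -f /dev/mapper/nvme_dm_test   # resolves to /dev/dm-N once udev settles
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test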
00:04:43.145 11:58:55 -- setup/common.sh@57 -- # (( part++ )) 00:04:43.145 11:58:55 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:43.145 11:58:55 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:43.145 11:58:55 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:43.145 11:58:55 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:44.123 The operation has completed successfully. 00:04:44.123 11:58:56 -- setup/common.sh@57 -- # (( part++ )) 00:04:44.123 11:58:56 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:44.123 11:58:56 -- setup/common.sh@62 -- # wait 1248283 00:04:44.123 11:58:56 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:44.123 11:58:56 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:44.123 11:58:56 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:44.124 11:58:56 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:44.124 11:58:56 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:44.124 11:58:56 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:44.124 11:58:56 -- setup/devices.sh@161 -- # break 00:04:44.124 11:58:56 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:44.124 11:58:56 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:44.124 11:58:56 -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:04:44.124 11:58:56 -- setup/devices.sh@166 -- # dm=dm-1 00:04:44.124 11:58:56 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:04:44.124 11:58:56 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:04:44.124 11:58:56 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:44.124 11:58:56 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:44.124 11:58:56 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:44.124 11:58:56 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:44.124 11:58:56 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:44.124 11:58:56 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:44.124 11:58:56 -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:44.124 11:58:56 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:44.124 11:58:56 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:44.124 11:58:56 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:44.124 11:58:56 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:44.124 11:58:56 -- setup/devices.sh@53 -- # local found=0 00:04:44.124 11:58:56 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:44.124 11:58:56 -- setup/devices.sh@56 -- # : 00:04:44.124 11:58:56 -- 
setup/devices.sh@59 -- # local pci status 00:04:44.124 11:58:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.124 11:58:56 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:44.124 11:58:56 -- setup/devices.sh@47 -- # setup output config 00:04:44.124 11:58:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.124 11:58:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:47.424 11:59:00 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.424 11:59:00 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:47.424 11:59:00 -- setup/devices.sh@63 -- # found=1 00:04:47.424 11:59:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.424 11:59:00 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.424 11:59:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.424 11:59:00 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.424 11:59:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.424 11:59:00 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.424 11:59:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.424 11:59:00 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.424 11:59:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.424 11:59:00 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.424 11:59:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.424 11:59:00 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.424 11:59:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.424 11:59:00 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.424 11:59:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.424 11:59:00 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.424 11:59:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.424 11:59:00 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.424 11:59:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.424 11:59:00 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.424 11:59:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.424 11:59:00 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.424 11:59:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.424 11:59:00 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.424 11:59:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.424 11:59:00 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.424 11:59:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.424 11:59:00 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.424 11:59:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.424 11:59:00 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.424 11:59:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.424 11:59:00 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.424 11:59:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.424 11:59:00 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:47.424 11:59:00 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:47.424 11:59:00 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:47.424 11:59:00 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:47.424 11:59:00 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:47.424 11:59:00 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:47.424 11:59:00 -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' '' 00:04:47.424 11:59:00 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:47.424 11:59:00 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 00:04:47.424 11:59:00 -- setup/devices.sh@50 -- # local mount_point= 00:04:47.424 11:59:00 -- setup/devices.sh@51 -- # local test_file= 00:04:47.424 11:59:00 -- setup/devices.sh@53 -- # local found=0 00:04:47.424 11:59:00 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:47.424 11:59:00 -- setup/devices.sh@59 -- # local pci status 00:04:47.424 11:59:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.424 11:59:00 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:47.424 11:59:00 -- setup/devices.sh@47 -- # setup output config 00:04:47.424 11:59:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.424 11:59:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:50.726 11:59:03 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.726 11:59:03 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]] 00:04:50.726 11:59:03 -- setup/devices.sh@63 -- # found=1 00:04:50.726 11:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.726 11:59:03 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.726 11:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.726 11:59:03 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.726 11:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.726 11:59:03 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.726 11:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.726 11:59:03 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.726 11:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.726 11:59:03 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.726 11:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.726 11:59:03 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.726 11:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.726 11:59:03 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.726 11:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 
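The loop above is the verify() helper from setup/devices.sh: it re-runs setup.sh config with PCI_ALLOWED pinned to the NVMe's address and checks that the status column for that device reports the expected active holders (here the two dm-1 holders), which is what keeps setup.sh from rebinding the disk mid-test. A trimmed sketch of that read loop, assuming the same "pci _ _ status" column layout seen in the trace for setup.sh's output:

    dev=0000:65:00.0
    found=0

    while read -r pci _ _ status; do
        # only the device under test matters; every other PCI function is skipped
        [[ $pci == "$dev" ]] || continue
        # setup.sh refuses to rebind a device that still has holders/mounts,
        # which is exactly the condition the test wants to observe
        [[ $status == *'Active devices: '* ]] && found=1
    done < <(PCI_ALLOWED="$dev" ./scripts/setup.sh config)

    (( found == 1 )) || echo "expected active-device marker not found" >&2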
00:04:50.726 11:59:03 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.726 11:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.726 11:59:03 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.726 11:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.726 11:59:03 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.726 11:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.726 11:59:03 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.726 11:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.726 11:59:03 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.726 11:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.726 11:59:03 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.727 11:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.727 11:59:03 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.727 11:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.727 11:59:03 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.727 11:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.727 11:59:03 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.727 11:59:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.988 11:59:03 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:50.988 11:59:03 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:50.988 11:59:03 -- setup/devices.sh@68 -- # return 0 00:04:50.988 11:59:03 -- setup/devices.sh@187 -- # cleanup_dm 00:04:50.988 11:59:03 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.988 11:59:03 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:50.988 11:59:03 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:50.988 11:59:03 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.988 11:59:03 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:50.988 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:50.988 11:59:03 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:50.988 11:59:03 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:50.988 00:04:50.988 real 0m10.143s 00:04:50.988 user 0m2.652s 00:04:50.988 sys 0m4.558s 00:04:50.988 11:59:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.988 11:59:03 -- common/autotest_common.sh@10 -- # set +x 00:04:50.988 ************************************ 00:04:50.988 END TEST dm_mount 00:04:50.988 ************************************ 00:04:50.988 11:59:03 -- setup/devices.sh@1 -- # cleanup 00:04:50.988 11:59:03 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:50.988 11:59:03 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.988 11:59:03 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.988 11:59:03 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:50.988 11:59:03 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:50.988 11:59:03 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:51.249 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:51.249 /dev/nvme0n1: 8 bytes were erased at offset 
0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:51.249 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:51.249 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:51.249 11:59:04 -- setup/devices.sh@12 -- # cleanup_dm 00:04:51.249 11:59:04 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:51.249 11:59:04 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:51.249 11:59:04 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:51.249 11:59:04 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:51.249 11:59:04 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:51.249 11:59:04 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:51.249 00:04:51.249 real 0m27.451s 00:04:51.249 user 0m8.178s 00:04:51.249 sys 0m14.133s 00:04:51.249 11:59:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.249 11:59:04 -- common/autotest_common.sh@10 -- # set +x 00:04:51.249 ************************************ 00:04:51.249 END TEST devices 00:04:51.249 ************************************ 00:04:51.249 00:04:51.249 real 1m31.458s 00:04:51.249 user 0m30.716s 00:04:51.249 sys 0m52.261s 00:04:51.249 11:59:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.249 11:59:04 -- common/autotest_common.sh@10 -- # set +x 00:04:51.249 ************************************ 00:04:51.249 END TEST setup.sh 00:04:51.249 ************************************ 00:04:51.511 11:59:04 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:54.855 Hugepages 00:04:54.855 node hugesize free / total 00:04:54.855 node0 1048576kB 0 / 0 00:04:54.855 node0 2048kB 2048 / 2048 00:04:54.855 node1 1048576kB 0 / 0 00:04:54.855 node1 2048kB 0 / 0 00:04:54.855 00:04:54.855 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:54.855 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:54.855 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:54.855 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:54.855 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:54.855 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:54.855 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:54.855 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:54.855 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:54.855 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:54.855 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:54.855 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:54.855 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:54.855 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:54.855 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:54.856 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:54.856 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:54.856 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:54.856 11:59:07 -- spdk/autotest.sh@141 -- # uname -s 00:04:54.856 11:59:07 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:54.856 11:59:07 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:54.856 11:59:07 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:58.178 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:58.178 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:58.178 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:58.178 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:58.178 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:58.178 0000:80:01.3 (8086 0b00): 
ioatdma -> vfio-pci 00:04:58.178 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:58.178 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:58.178 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:58.178 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:58.178 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:58.178 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:58.178 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:58.178 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:58.178 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:58.178 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:00.091 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:00.091 11:59:12 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:01.033 11:59:13 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:01.033 11:59:13 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:01.033 11:59:13 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:01.034 11:59:13 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:01.034 11:59:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:01.034 11:59:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:01.034 11:59:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:01.034 11:59:13 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:01.034 11:59:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:01.034 11:59:13 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:01.034 11:59:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:01.034 11:59:13 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:04.344 Waiting for block devices as requested 00:05:04.344 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:04.344 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:04.344 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:04.344 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:04.604 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:04.604 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:04.604 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:04.865 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:04.865 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:04.865 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:05.126 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:05.126 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:05.126 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:05.126 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:05.387 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:05.387 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:05.387 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:05.387 11:59:18 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:05.387 11:59:18 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:05.387 11:59:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:05.387 11:59:18 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:05:05.387 11:59:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:05.387 11:59:18 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:05.387 11:59:18 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:05.387 11:59:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:05.387 11:59:18 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:05.387 11:59:18 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:05.387 11:59:18 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:05.387 11:59:18 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:05.387 11:59:18 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:05.387 11:59:18 -- common/autotest_common.sh@1530 -- # oacs=' 0x5f' 00:05:05.387 11:59:18 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:05.387 11:59:18 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:05.387 11:59:18 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:05.387 11:59:18 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:05.387 11:59:18 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:05.387 11:59:18 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:05.387 11:59:18 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:05.387 11:59:18 -- common/autotest_common.sh@1542 -- # continue 00:05:05.387 11:59:18 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:05.387 11:59:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:05.387 11:59:18 -- common/autotest_common.sh@10 -- # set +x 00:05:05.647 11:59:18 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:05.647 11:59:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:05.647 11:59:18 -- common/autotest_common.sh@10 -- # set +x 00:05:05.647 11:59:18 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:08.949 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:08.949 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:08.949 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:08.949 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:08.949 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:08.949 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:08.949 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:08.949 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:08.949 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:08.949 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:08.949 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:08.949 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:08.949 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:08.949 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:08.949 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:09.210 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:09.210 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:09.210 11:59:22 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:09.210 11:59:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:09.210 11:59:22 -- common/autotest_common.sh@10 -- # set +x 00:05:09.210 11:59:22 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:09.210 11:59:22 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:09.210 11:59:22 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:09.210 11:59:22 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:09.210 11:59:22 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:09.210 11:59:22 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:09.210 11:59:22 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:09.210 
11:59:22 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:09.210 11:59:22 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:09.210 11:59:22 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:09.210 11:59:22 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:09.471 11:59:22 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:09.471 11:59:22 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:09.471 11:59:22 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:09.471 11:59:22 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:09.471 11:59:22 -- common/autotest_common.sh@1565 -- # device=0xa80a 00:05:09.471 11:59:22 -- common/autotest_common.sh@1566 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:09.471 11:59:22 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:09.471 11:59:22 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:09.471 11:59:22 -- common/autotest_common.sh@1578 -- # return 0 00:05:09.471 11:59:22 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:09.471 11:59:22 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:09.471 11:59:22 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:09.471 11:59:22 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:09.471 11:59:22 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:09.471 11:59:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:09.471 11:59:22 -- common/autotest_common.sh@10 -- # set +x 00:05:09.471 11:59:22 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:09.471 11:59:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:09.471 11:59:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:09.471 11:59:22 -- common/autotest_common.sh@10 -- # set +x 00:05:09.471 ************************************ 00:05:09.471 START TEST env 00:05:09.471 ************************************ 00:05:09.471 11:59:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:09.471 * Looking for test storage... 
00:05:09.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:09.471 11:59:22 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:09.471 11:59:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:09.471 11:59:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:09.471 11:59:22 -- common/autotest_common.sh@10 -- # set +x 00:05:09.471 ************************************ 00:05:09.471 START TEST env_memory 00:05:09.471 ************************************ 00:05:09.471 11:59:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:09.471 00:05:09.471 00:05:09.471 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.471 http://cunit.sourceforge.net/ 00:05:09.471 00:05:09.471 00:05:09.471 Suite: memory 00:05:09.471 Test: alloc and free memory map ...[2024-06-11 11:59:22.429780] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:09.471 passed 00:05:09.471 Test: mem map translation ...[2024-06-11 11:59:22.455354] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:09.471 [2024-06-11 11:59:22.455383] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:09.471 [2024-06-11 11:59:22.455432] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:09.471 [2024-06-11 11:59:22.455442] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:09.471 passed 00:05:09.734 Test: mem map registration ...[2024-06-11 11:59:22.510658] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:09.734 [2024-06-11 11:59:22.510681] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:09.734 passed 00:05:09.734 Test: mem map adjacent registrations ...passed 00:05:09.734 00:05:09.734 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.734 suites 1 1 n/a 0 0 00:05:09.734 tests 4 4 4 0 0 00:05:09.734 asserts 152 152 152 0 n/a 00:05:09.734 00:05:09.734 Elapsed time = 0.194 seconds 00:05:09.734 00:05:09.734 real 0m0.207s 00:05:09.734 user 0m0.197s 00:05:09.734 sys 0m0.010s 00:05:09.734 11:59:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.734 11:59:22 -- common/autotest_common.sh@10 -- # set +x 00:05:09.734 ************************************ 00:05:09.734 END TEST env_memory 00:05:09.734 ************************************ 00:05:09.734 11:59:22 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:09.734 11:59:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:09.734 11:59:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:09.734 11:59:22 -- common/autotest_common.sh@10 -- # set +x 
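The env_memory suite above exercises SPDK's mem-map layer (spdk_mem_map_alloc, spdk_mem_map_set_translation, spdk_mem_register) through the standalone memory_ut binary, and the env_vtophys suite that follows walks the DPDK malloc heap through expand/shrink cycles from 2MB up to 1026MB. A minimal sketch of re-running the two binaries by hand, assuming the same built workspace path shown in the log, hugepages already configured via scripts/setup.sh, and root privileges as in the CI job (the binaries take no arguments):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # mem map alloc / translation / registration checks (CUnit suite "memory")
    sudo "$SPDK_DIR/test/env/memory/memory_ut"
    # vtophys malloc tests: heap repeatedly expanded and shrunk, 2MB through 1026MB
    sudo "$SPDK_DIR/test/env/vtophys/vtophys"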
00:05:09.734 ************************************ 00:05:09.734 START TEST env_vtophys 00:05:09.734 ************************************ 00:05:09.734 11:59:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:09.734 EAL: lib.eal log level changed from notice to debug 00:05:09.734 EAL: Detected lcore 0 as core 0 on socket 0 00:05:09.734 EAL: Detected lcore 1 as core 1 on socket 0 00:05:09.734 EAL: Detected lcore 2 as core 2 on socket 0 00:05:09.734 EAL: Detected lcore 3 as core 3 on socket 0 00:05:09.734 EAL: Detected lcore 4 as core 4 on socket 0 00:05:09.734 EAL: Detected lcore 5 as core 5 on socket 0 00:05:09.734 EAL: Detected lcore 6 as core 6 on socket 0 00:05:09.734 EAL: Detected lcore 7 as core 7 on socket 0 00:05:09.734 EAL: Detected lcore 8 as core 8 on socket 0 00:05:09.734 EAL: Detected lcore 9 as core 9 on socket 0 00:05:09.734 EAL: Detected lcore 10 as core 10 on socket 0 00:05:09.734 EAL: Detected lcore 11 as core 11 on socket 0 00:05:09.734 EAL: Detected lcore 12 as core 12 on socket 0 00:05:09.734 EAL: Detected lcore 13 as core 13 on socket 0 00:05:09.734 EAL: Detected lcore 14 as core 14 on socket 0 00:05:09.734 EAL: Detected lcore 15 as core 15 on socket 0 00:05:09.734 EAL: Detected lcore 16 as core 16 on socket 0 00:05:09.734 EAL: Detected lcore 17 as core 17 on socket 0 00:05:09.734 EAL: Detected lcore 18 as core 18 on socket 0 00:05:09.734 EAL: Detected lcore 19 as core 19 on socket 0 00:05:09.734 EAL: Detected lcore 20 as core 20 on socket 0 00:05:09.734 EAL: Detected lcore 21 as core 21 on socket 0 00:05:09.734 EAL: Detected lcore 22 as core 22 on socket 0 00:05:09.734 EAL: Detected lcore 23 as core 23 on socket 0 00:05:09.734 EAL: Detected lcore 24 as core 24 on socket 0 00:05:09.734 EAL: Detected lcore 25 as core 25 on socket 0 00:05:09.734 EAL: Detected lcore 26 as core 26 on socket 0 00:05:09.734 EAL: Detected lcore 27 as core 27 on socket 0 00:05:09.734 EAL: Detected lcore 28 as core 28 on socket 0 00:05:09.734 EAL: Detected lcore 29 as core 29 on socket 0 00:05:09.734 EAL: Detected lcore 30 as core 30 on socket 0 00:05:09.734 EAL: Detected lcore 31 as core 31 on socket 0 00:05:09.734 EAL: Detected lcore 32 as core 32 on socket 0 00:05:09.734 EAL: Detected lcore 33 as core 33 on socket 0 00:05:09.734 EAL: Detected lcore 34 as core 34 on socket 0 00:05:09.734 EAL: Detected lcore 35 as core 35 on socket 0 00:05:09.734 EAL: Detected lcore 36 as core 0 on socket 1 00:05:09.734 EAL: Detected lcore 37 as core 1 on socket 1 00:05:09.734 EAL: Detected lcore 38 as core 2 on socket 1 00:05:09.734 EAL: Detected lcore 39 as core 3 on socket 1 00:05:09.734 EAL: Detected lcore 40 as core 4 on socket 1 00:05:09.734 EAL: Detected lcore 41 as core 5 on socket 1 00:05:09.734 EAL: Detected lcore 42 as core 6 on socket 1 00:05:09.734 EAL: Detected lcore 43 as core 7 on socket 1 00:05:09.734 EAL: Detected lcore 44 as core 8 on socket 1 00:05:09.734 EAL: Detected lcore 45 as core 9 on socket 1 00:05:09.734 EAL: Detected lcore 46 as core 10 on socket 1 00:05:09.734 EAL: Detected lcore 47 as core 11 on socket 1 00:05:09.734 EAL: Detected lcore 48 as core 12 on socket 1 00:05:09.734 EAL: Detected lcore 49 as core 13 on socket 1 00:05:09.734 EAL: Detected lcore 50 as core 14 on socket 1 00:05:09.734 EAL: Detected lcore 51 as core 15 on socket 1 00:05:09.734 EAL: Detected lcore 52 as core 16 on socket 1 00:05:09.734 EAL: Detected lcore 53 as core 17 on socket 1 00:05:09.734 EAL: Detected lcore 54 as core 18 on socket 1 
00:05:09.734 EAL: Detected lcore 55 as core 19 on socket 1 00:05:09.734 EAL: Detected lcore 56 as core 20 on socket 1 00:05:09.734 EAL: Detected lcore 57 as core 21 on socket 1 00:05:09.734 EAL: Detected lcore 58 as core 22 on socket 1 00:05:09.734 EAL: Detected lcore 59 as core 23 on socket 1 00:05:09.734 EAL: Detected lcore 60 as core 24 on socket 1 00:05:09.734 EAL: Detected lcore 61 as core 25 on socket 1 00:05:09.734 EAL: Detected lcore 62 as core 26 on socket 1 00:05:09.734 EAL: Detected lcore 63 as core 27 on socket 1 00:05:09.734 EAL: Detected lcore 64 as core 28 on socket 1 00:05:09.734 EAL: Detected lcore 65 as core 29 on socket 1 00:05:09.734 EAL: Detected lcore 66 as core 30 on socket 1 00:05:09.734 EAL: Detected lcore 67 as core 31 on socket 1 00:05:09.734 EAL: Detected lcore 68 as core 32 on socket 1 00:05:09.734 EAL: Detected lcore 69 as core 33 on socket 1 00:05:09.734 EAL: Detected lcore 70 as core 34 on socket 1 00:05:09.734 EAL: Detected lcore 71 as core 35 on socket 1 00:05:09.734 EAL: Detected lcore 72 as core 0 on socket 0 00:05:09.734 EAL: Detected lcore 73 as core 1 on socket 0 00:05:09.734 EAL: Detected lcore 74 as core 2 on socket 0 00:05:09.734 EAL: Detected lcore 75 as core 3 on socket 0 00:05:09.734 EAL: Detected lcore 76 as core 4 on socket 0 00:05:09.734 EAL: Detected lcore 77 as core 5 on socket 0 00:05:09.734 EAL: Detected lcore 78 as core 6 on socket 0 00:05:09.734 EAL: Detected lcore 79 as core 7 on socket 0 00:05:09.734 EAL: Detected lcore 80 as core 8 on socket 0 00:05:09.734 EAL: Detected lcore 81 as core 9 on socket 0 00:05:09.734 EAL: Detected lcore 82 as core 10 on socket 0 00:05:09.734 EAL: Detected lcore 83 as core 11 on socket 0 00:05:09.734 EAL: Detected lcore 84 as core 12 on socket 0 00:05:09.734 EAL: Detected lcore 85 as core 13 on socket 0 00:05:09.734 EAL: Detected lcore 86 as core 14 on socket 0 00:05:09.734 EAL: Detected lcore 87 as core 15 on socket 0 00:05:09.734 EAL: Detected lcore 88 as core 16 on socket 0 00:05:09.734 EAL: Detected lcore 89 as core 17 on socket 0 00:05:09.734 EAL: Detected lcore 90 as core 18 on socket 0 00:05:09.735 EAL: Detected lcore 91 as core 19 on socket 0 00:05:09.735 EAL: Detected lcore 92 as core 20 on socket 0 00:05:09.735 EAL: Detected lcore 93 as core 21 on socket 0 00:05:09.735 EAL: Detected lcore 94 as core 22 on socket 0 00:05:09.735 EAL: Detected lcore 95 as core 23 on socket 0 00:05:09.735 EAL: Detected lcore 96 as core 24 on socket 0 00:05:09.735 EAL: Detected lcore 97 as core 25 on socket 0 00:05:09.735 EAL: Detected lcore 98 as core 26 on socket 0 00:05:09.735 EAL: Detected lcore 99 as core 27 on socket 0 00:05:09.735 EAL: Detected lcore 100 as core 28 on socket 0 00:05:09.735 EAL: Detected lcore 101 as core 29 on socket 0 00:05:09.735 EAL: Detected lcore 102 as core 30 on socket 0 00:05:09.735 EAL: Detected lcore 103 as core 31 on socket 0 00:05:09.735 EAL: Detected lcore 104 as core 32 on socket 0 00:05:09.735 EAL: Detected lcore 105 as core 33 on socket 0 00:05:09.735 EAL: Detected lcore 106 as core 34 on socket 0 00:05:09.735 EAL: Detected lcore 107 as core 35 on socket 0 00:05:09.735 EAL: Detected lcore 108 as core 0 on socket 1 00:05:09.735 EAL: Detected lcore 109 as core 1 on socket 1 00:05:09.735 EAL: Detected lcore 110 as core 2 on socket 1 00:05:09.735 EAL: Detected lcore 111 as core 3 on socket 1 00:05:09.735 EAL: Detected lcore 112 as core 4 on socket 1 00:05:09.735 EAL: Detected lcore 113 as core 5 on socket 1 00:05:09.735 EAL: Detected lcore 114 as core 6 on socket 1 00:05:09.735 
EAL: Detected lcore 115 as core 7 on socket 1 00:05:09.735 EAL: Detected lcore 116 as core 8 on socket 1 00:05:09.735 EAL: Detected lcore 117 as core 9 on socket 1 00:05:09.735 EAL: Detected lcore 118 as core 10 on socket 1 00:05:09.735 EAL: Detected lcore 119 as core 11 on socket 1 00:05:09.735 EAL: Detected lcore 120 as core 12 on socket 1 00:05:09.735 EAL: Detected lcore 121 as core 13 on socket 1 00:05:09.735 EAL: Detected lcore 122 as core 14 on socket 1 00:05:09.735 EAL: Detected lcore 123 as core 15 on socket 1 00:05:09.735 EAL: Detected lcore 124 as core 16 on socket 1 00:05:09.735 EAL: Detected lcore 125 as core 17 on socket 1 00:05:09.735 EAL: Detected lcore 126 as core 18 on socket 1 00:05:09.735 EAL: Detected lcore 127 as core 19 on socket 1 00:05:09.735 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:09.735 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:09.735 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:09.735 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:09.735 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:09.735 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:09.735 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:09.735 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:09.735 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:09.735 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:09.735 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:09.735 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:09.735 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:09.735 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:09.735 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:09.735 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:09.735 EAL: Maximum logical cores by configuration: 128 00:05:09.735 EAL: Detected CPU lcores: 128 00:05:09.735 EAL: Detected NUMA nodes: 2 00:05:09.735 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:09.735 EAL: Detected shared linkage of DPDK 00:05:09.735 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:09.735 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:09.735 EAL: Registered [vdev] bus. 
00:05:09.735 EAL: bus.vdev log level changed from disabled to notice 00:05:09.735 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:09.735 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:09.735 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:09.735 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:09.735 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:09.735 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:09.735 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:09.735 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:09.735 EAL: No shared files mode enabled, IPC will be disabled 00:05:09.735 EAL: No shared files mode enabled, IPC is disabled 00:05:09.735 EAL: Bus pci wants IOVA as 'DC' 00:05:09.735 EAL: Bus vdev wants IOVA as 'DC' 00:05:09.735 EAL: Buses did not request a specific IOVA mode. 00:05:09.735 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:09.735 EAL: Selected IOVA mode 'VA' 00:05:09.735 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.735 EAL: Probing VFIO support... 00:05:09.735 EAL: IOMMU type 1 (Type 1) is supported 00:05:09.735 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:09.735 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:09.735 EAL: VFIO support initialized 00:05:09.735 EAL: Ask a virtual area of 0x2e000 bytes 00:05:09.735 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:09.735 EAL: Setting up physically contiguous memory... 
00:05:09.735 EAL: Setting maximum number of open files to 524288 00:05:09.735 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:09.735 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:09.735 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:09.735 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.735 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:09.735 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:09.735 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.735 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:09.735 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:09.735 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.735 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:09.735 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:09.735 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.735 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:09.735 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:09.735 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.735 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:09.735 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:09.735 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.735 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:09.735 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:09.735 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.735 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:09.735 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:09.735 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.735 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:09.735 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:09.735 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:09.735 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.735 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:09.735 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:09.735 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.735 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:09.735 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:09.735 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.735 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:09.735 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:09.735 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.735 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:09.735 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:09.735 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.735 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:09.735 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:09.735 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.735 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:09.735 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:09.735 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.735 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:09.735 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:09.735 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.735 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:09.735 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:09.735 EAL: Hugepages will be freed exactly as allocated. 00:05:09.735 EAL: No shared files mode enabled, IPC is disabled 00:05:09.735 EAL: No shared files mode enabled, IPC is disabled 00:05:09.735 EAL: TSC frequency is ~2400000 KHz 00:05:09.735 EAL: Main lcore 0 is ready (tid=7f6e81f48a00;cpuset=[0]) 00:05:09.735 EAL: Trying to obtain current memory policy. 00:05:09.735 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.735 EAL: Restoring previous memory policy: 0 00:05:09.735 EAL: request: mp_malloc_sync 00:05:09.736 EAL: No shared files mode enabled, IPC is disabled 00:05:09.736 EAL: Heap on socket 0 was expanded by 2MB 00:05:09.736 EAL: No shared files mode enabled, IPC is disabled 00:05:09.736 EAL: No shared files mode enabled, IPC is disabled 00:05:09.736 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:09.736 EAL: Mem event callback 'spdk:(nil)' registered 00:05:09.736 00:05:09.736 00:05:09.736 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.736 http://cunit.sourceforge.net/ 00:05:09.736 00:05:09.736 00:05:09.736 Suite: components_suite 00:05:09.736 Test: vtophys_malloc_test ...passed 00:05:09.736 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:09.736 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.736 EAL: Restoring previous memory policy: 4 00:05:09.736 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.736 EAL: request: mp_malloc_sync 00:05:09.736 EAL: No shared files mode enabled, IPC is disabled 00:05:09.736 EAL: Heap on socket 0 was expanded by 4MB 00:05:09.736 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.736 EAL: request: mp_malloc_sync 00:05:09.736 EAL: No shared files mode enabled, IPC is disabled 00:05:09.736 EAL: Heap on socket 0 was shrunk by 4MB 00:05:09.736 EAL: Trying to obtain current memory policy. 00:05:09.736 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.736 EAL: Restoring previous memory policy: 4 00:05:09.736 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.736 EAL: request: mp_malloc_sync 00:05:09.736 EAL: No shared files mode enabled, IPC is disabled 00:05:09.736 EAL: Heap on socket 0 was expanded by 6MB 00:05:09.736 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.736 EAL: request: mp_malloc_sync 00:05:09.736 EAL: No shared files mode enabled, IPC is disabled 00:05:09.736 EAL: Heap on socket 0 was shrunk by 6MB 00:05:09.736 EAL: Trying to obtain current memory policy. 00:05:09.736 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.736 EAL: Restoring previous memory policy: 4 00:05:09.736 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.736 EAL: request: mp_malloc_sync 00:05:09.736 EAL: No shared files mode enabled, IPC is disabled 00:05:09.736 EAL: Heap on socket 0 was expanded by 10MB 00:05:09.736 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.736 EAL: request: mp_malloc_sync 00:05:09.736 EAL: No shared files mode enabled, IPC is disabled 00:05:09.736 EAL: Heap on socket 0 was shrunk by 10MB 00:05:09.736 EAL: Trying to obtain current memory policy. 
00:05:09.736 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.736 EAL: Restoring previous memory policy: 4 00:05:09.736 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.736 EAL: request: mp_malloc_sync 00:05:09.736 EAL: No shared files mode enabled, IPC is disabled 00:05:09.736 EAL: Heap on socket 0 was expanded by 18MB 00:05:09.736 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.736 EAL: request: mp_malloc_sync 00:05:09.736 EAL: No shared files mode enabled, IPC is disabled 00:05:09.736 EAL: Heap on socket 0 was shrunk by 18MB 00:05:09.736 EAL: Trying to obtain current memory policy. 00:05:09.736 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.736 EAL: Restoring previous memory policy: 4 00:05:09.736 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.736 EAL: request: mp_malloc_sync 00:05:09.736 EAL: No shared files mode enabled, IPC is disabled 00:05:09.736 EAL: Heap on socket 0 was expanded by 34MB 00:05:09.736 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.736 EAL: request: mp_malloc_sync 00:05:09.736 EAL: No shared files mode enabled, IPC is disabled 00:05:09.736 EAL: Heap on socket 0 was shrunk by 34MB 00:05:09.736 EAL: Trying to obtain current memory policy. 00:05:09.736 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.736 EAL: Restoring previous memory policy: 4 00:05:09.736 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.736 EAL: request: mp_malloc_sync 00:05:09.736 EAL: No shared files mode enabled, IPC is disabled 00:05:09.736 EAL: Heap on socket 0 was expanded by 66MB 00:05:09.736 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.736 EAL: request: mp_malloc_sync 00:05:09.736 EAL: No shared files mode enabled, IPC is disabled 00:05:09.736 EAL: Heap on socket 0 was shrunk by 66MB 00:05:09.736 EAL: Trying to obtain current memory policy. 00:05:09.736 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.736 EAL: Restoring previous memory policy: 4 00:05:09.736 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.736 EAL: request: mp_malloc_sync 00:05:09.736 EAL: No shared files mode enabled, IPC is disabled 00:05:09.736 EAL: Heap on socket 0 was expanded by 130MB 00:05:09.997 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.997 EAL: request: mp_malloc_sync 00:05:09.997 EAL: No shared files mode enabled, IPC is disabled 00:05:09.997 EAL: Heap on socket 0 was shrunk by 130MB 00:05:09.997 EAL: Trying to obtain current memory policy. 00:05:09.997 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.997 EAL: Restoring previous memory policy: 4 00:05:09.997 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.997 EAL: request: mp_malloc_sync 00:05:09.997 EAL: No shared files mode enabled, IPC is disabled 00:05:09.997 EAL: Heap on socket 0 was expanded by 258MB 00:05:09.997 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.997 EAL: request: mp_malloc_sync 00:05:09.997 EAL: No shared files mode enabled, IPC is disabled 00:05:09.997 EAL: Heap on socket 0 was shrunk by 258MB 00:05:09.997 EAL: Trying to obtain current memory policy. 
00:05:09.997 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.997 EAL: Restoring previous memory policy: 4 00:05:09.997 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.997 EAL: request: mp_malloc_sync 00:05:09.997 EAL: No shared files mode enabled, IPC is disabled 00:05:09.997 EAL: Heap on socket 0 was expanded by 514MB 00:05:09.997 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.257 EAL: request: mp_malloc_sync 00:05:10.257 EAL: No shared files mode enabled, IPC is disabled 00:05:10.257 EAL: Heap on socket 0 was shrunk by 514MB 00:05:10.257 EAL: Trying to obtain current memory policy. 00:05:10.257 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.258 EAL: Restoring previous memory policy: 4 00:05:10.258 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.258 EAL: request: mp_malloc_sync 00:05:10.258 EAL: No shared files mode enabled, IPC is disabled 00:05:10.258 EAL: Heap on socket 0 was expanded by 1026MB 00:05:10.258 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.518 EAL: request: mp_malloc_sync 00:05:10.518 EAL: No shared files mode enabled, IPC is disabled 00:05:10.518 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:10.518 passed 00:05:10.518 00:05:10.518 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.518 suites 1 1 n/a 0 0 00:05:10.518 tests 2 2 2 0 0 00:05:10.518 asserts 497 497 497 0 n/a 00:05:10.518 00:05:10.518 Elapsed time = 0.645 seconds 00:05:10.518 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.518 EAL: request: mp_malloc_sync 00:05:10.518 EAL: No shared files mode enabled, IPC is disabled 00:05:10.518 EAL: Heap on socket 0 was shrunk by 2MB 00:05:10.518 EAL: No shared files mode enabled, IPC is disabled 00:05:10.518 EAL: No shared files mode enabled, IPC is disabled 00:05:10.518 EAL: No shared files mode enabled, IPC is disabled 00:05:10.518 00:05:10.519 real 0m0.771s 00:05:10.519 user 0m0.407s 00:05:10.519 sys 0m0.328s 00:05:10.519 11:59:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.519 11:59:23 -- common/autotest_common.sh@10 -- # set +x 00:05:10.519 ************************************ 00:05:10.519 END TEST env_vtophys 00:05:10.519 ************************************ 00:05:10.519 11:59:23 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:10.519 11:59:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:10.519 11:59:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:10.519 11:59:23 -- common/autotest_common.sh@10 -- # set +x 00:05:10.519 ************************************ 00:05:10.519 START TEST env_pci 00:05:10.519 ************************************ 00:05:10.519 11:59:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:10.519 00:05:10.519 00:05:10.519 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.519 http://cunit.sourceforge.net/ 00:05:10.519 00:05:10.519 00:05:10.519 Suite: pci 00:05:10.519 Test: pci_hook ...[2024-06-11 11:59:23.454576] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1259755 has claimed it 00:05:10.519 EAL: Cannot find device (10000:00:01.0) 00:05:10.519 EAL: Failed to attach device on primary process 00:05:10.519 passed 00:05:10.519 00:05:10.519 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.519 suites 1 1 n/a 0 0 00:05:10.519 tests 1 1 1 0 0 
00:05:10.519 asserts 25 25 25 0 n/a 00:05:10.519 00:05:10.519 Elapsed time = 0.030 seconds 00:05:10.519 00:05:10.519 real 0m0.050s 00:05:10.519 user 0m0.014s 00:05:10.519 sys 0m0.036s 00:05:10.519 11:59:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.519 11:59:23 -- common/autotest_common.sh@10 -- # set +x 00:05:10.519 ************************************ 00:05:10.519 END TEST env_pci 00:05:10.519 ************************************ 00:05:10.519 11:59:23 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:10.519 11:59:23 -- env/env.sh@15 -- # uname 00:05:10.519 11:59:23 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:10.519 11:59:23 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:10.519 11:59:23 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:10.519 11:59:23 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:10.519 11:59:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:10.519 11:59:23 -- common/autotest_common.sh@10 -- # set +x 00:05:10.519 ************************************ 00:05:10.519 START TEST env_dpdk_post_init 00:05:10.519 ************************************ 00:05:10.519 11:59:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:10.779 EAL: Detected CPU lcores: 128 00:05:10.779 EAL: Detected NUMA nodes: 2 00:05:10.779 EAL: Detected shared linkage of DPDK 00:05:10.779 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:10.779 EAL: Selected IOVA mode 'VA' 00:05:10.779 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.779 EAL: VFIO support initialized 00:05:10.779 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:10.779 EAL: Using IOMMU type 1 (Type 1) 00:05:10.779 EAL: Ignore mapping IO port bar(1) 00:05:11.040 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:11.040 EAL: Ignore mapping IO port bar(1) 00:05:11.300 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:11.300 EAL: Ignore mapping IO port bar(1) 00:05:11.300 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:11.561 EAL: Ignore mapping IO port bar(1) 00:05:11.561 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:11.821 EAL: Ignore mapping IO port bar(1) 00:05:11.821 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:12.081 EAL: Ignore mapping IO port bar(1) 00:05:12.081 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:12.081 EAL: Ignore mapping IO port bar(1) 00:05:12.341 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:12.341 EAL: Ignore mapping IO port bar(1) 00:05:12.601 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:12.861 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:12.861 EAL: Ignore mapping IO port bar(1) 00:05:12.861 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:13.121 EAL: Ignore mapping IO port bar(1) 00:05:13.121 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:13.382 EAL: Ignore mapping IO port bar(1) 00:05:13.382 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 
00:05:13.642 EAL: Ignore mapping IO port bar(1) 00:05:13.642 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:13.967 EAL: Ignore mapping IO port bar(1) 00:05:13.967 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:13.967 EAL: Ignore mapping IO port bar(1) 00:05:14.227 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:14.227 EAL: Ignore mapping IO port bar(1) 00:05:14.227 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:14.487 EAL: Ignore mapping IO port bar(1) 00:05:14.487 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:14.487 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:14.487 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:14.748 Starting DPDK initialization... 00:05:14.748 Starting SPDK post initialization... 00:05:14.748 SPDK NVMe probe 00:05:14.748 Attaching to 0000:65:00.0 00:05:14.748 Attached to 0000:65:00.0 00:05:14.748 Cleaning up... 00:05:16.659 00:05:16.659 real 0m5.709s 00:05:16.659 user 0m0.171s 00:05:16.659 sys 0m0.083s 00:05:16.659 11:59:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.659 11:59:29 -- common/autotest_common.sh@10 -- # set +x 00:05:16.659 ************************************ 00:05:16.659 END TEST env_dpdk_post_init 00:05:16.659 ************************************ 00:05:16.659 11:59:29 -- env/env.sh@26 -- # uname 00:05:16.659 11:59:29 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:16.659 11:59:29 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:16.659 11:59:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:16.659 11:59:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.659 11:59:29 -- common/autotest_common.sh@10 -- # set +x 00:05:16.659 ************************************ 00:05:16.659 START TEST env_mem_callbacks 00:05:16.659 ************************************ 00:05:16.659 11:59:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:16.659 EAL: Detected CPU lcores: 128 00:05:16.659 EAL: Detected NUMA nodes: 2 00:05:16.659 EAL: Detected shared linkage of DPDK 00:05:16.659 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:16.659 EAL: Selected IOVA mode 'VA' 00:05:16.659 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.659 EAL: VFIO support initialized 00:05:16.659 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:16.659 00:05:16.659 00:05:16.659 CUnit - A unit testing framework for C - Version 2.1-3 00:05:16.659 http://cunit.sourceforge.net/ 00:05:16.659 00:05:16.659 00:05:16.659 Suite: memory 00:05:16.659 Test: test ... 
00:05:16.659 register 0x200000200000 2097152 00:05:16.659 malloc 3145728 00:05:16.659 register 0x200000400000 4194304 00:05:16.659 buf 0x200000500000 len 3145728 PASSED 00:05:16.659 malloc 64 00:05:16.659 buf 0x2000004fff40 len 64 PASSED 00:05:16.659 malloc 4194304 00:05:16.659 register 0x200000800000 6291456 00:05:16.659 buf 0x200000a00000 len 4194304 PASSED 00:05:16.659 free 0x200000500000 3145728 00:05:16.659 free 0x2000004fff40 64 00:05:16.659 unregister 0x200000400000 4194304 PASSED 00:05:16.659 free 0x200000a00000 4194304 00:05:16.659 unregister 0x200000800000 6291456 PASSED 00:05:16.659 malloc 8388608 00:05:16.659 register 0x200000400000 10485760 00:05:16.659 buf 0x200000600000 len 8388608 PASSED 00:05:16.659 free 0x200000600000 8388608 00:05:16.659 unregister 0x200000400000 10485760 PASSED 00:05:16.659 passed 00:05:16.659 00:05:16.659 Run Summary: Type Total Ran Passed Failed Inactive 00:05:16.659 suites 1 1 n/a 0 0 00:05:16.659 tests 1 1 1 0 0 00:05:16.659 asserts 15 15 15 0 n/a 00:05:16.659 00:05:16.659 Elapsed time = 0.004 seconds 00:05:16.659 00:05:16.659 real 0m0.057s 00:05:16.659 user 0m0.016s 00:05:16.659 sys 0m0.041s 00:05:16.659 11:59:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.659 11:59:29 -- common/autotest_common.sh@10 -- # set +x 00:05:16.659 ************************************ 00:05:16.659 END TEST env_mem_callbacks 00:05:16.659 ************************************ 00:05:16.659 00:05:16.659 real 0m7.092s 00:05:16.659 user 0m0.912s 00:05:16.659 sys 0m0.722s 00:05:16.659 11:59:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.659 11:59:29 -- common/autotest_common.sh@10 -- # set +x 00:05:16.659 ************************************ 00:05:16.659 END TEST env 00:05:16.659 ************************************ 00:05:16.659 11:59:29 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:16.659 11:59:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:16.659 11:59:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.659 11:59:29 -- common/autotest_common.sh@10 -- # set +x 00:05:16.659 ************************************ 00:05:16.659 START TEST rpc 00:05:16.659 ************************************ 00:05:16.659 11:59:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:16.659 * Looking for test storage... 00:05:16.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:16.659 11:59:29 -- rpc/rpc.sh@65 -- # spdk_pid=1260951 00:05:16.659 11:59:29 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.659 11:59:29 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:16.659 11:59:29 -- rpc/rpc.sh@67 -- # waitforlisten 1260951 00:05:16.659 11:59:29 -- common/autotest_common.sh@819 -- # '[' -z 1260951 ']' 00:05:16.659 11:59:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.659 11:59:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:16.659 11:59:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
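The rpc suites below drive the freshly launched spdk_tgt entirely over the /var/tmp/spdk.sock RPC socket via rpc_cmd. A minimal sketch of the same bdev round-trip issued directly with scripts/rpc.py, assuming the target is already listening on the default socket; every method name and argument is the one visible in the rpc_integrity output further down, and Malloc0/Passthru0 are the default names the target assigns:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK_DIR/scripts/rpc.py"
    "$RPC" bdev_malloc_create 8 512                  # 8MB malloc bdev with 512-byte blocks -> Malloc0
    "$RPC" bdev_passthru_create -b Malloc0 -p Passthru0
    "$RPC" bdev_get_bdevs | jq length                # 2, matching the "'[' 2 == 2 ']'" check below
    "$RPC" bdev_passthru_delete Passthru0
    "$RPC" bdev_malloc_delete Malloc0
    "$RPC" bdev_get_bdevs | jq length                # back to 0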
00:05:16.659 11:59:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:16.659 11:59:29 -- common/autotest_common.sh@10 -- # set +x 00:05:16.659 [2024-06-11 11:59:29.581253] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:16.659 [2024-06-11 11:59:29.581313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260951 ] 00:05:16.659 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.659 [2024-06-11 11:59:29.647656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.659 [2024-06-11 11:59:29.683761] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:16.659 [2024-06-11 11:59:29.683918] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:16.659 [2024-06-11 11:59:29.683932] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1260951' to capture a snapshot of events at runtime. 00:05:16.659 [2024-06-11 11:59:29.683941] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1260951 for offline analysis/debug. 00:05:16.659 [2024-06-11 11:59:29.683966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.603 11:59:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:17.603 11:59:30 -- common/autotest_common.sh@852 -- # return 0 00:05:17.604 11:59:30 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:17.604 11:59:30 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:17.604 11:59:30 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:17.604 11:59:30 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:17.604 11:59:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:17.604 11:59:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.604 11:59:30 -- common/autotest_common.sh@10 -- # set +x 00:05:17.604 ************************************ 00:05:17.604 START TEST rpc_integrity 00:05:17.604 ************************************ 00:05:17.604 11:59:30 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:17.604 11:59:30 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:17.604 11:59:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:17.604 11:59:30 -- common/autotest_common.sh@10 -- # set +x 00:05:17.604 11:59:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:17.604 11:59:30 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:17.604 11:59:30 -- rpc/rpc.sh@13 -- # jq length 00:05:17.604 11:59:30 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:17.604 11:59:30 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:17.604 11:59:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:17.604 11:59:30 -- common/autotest_common.sh@10 -- # set +x 00:05:17.604 11:59:30 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:05:17.604 11:59:30 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:17.604 11:59:30 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:17.604 11:59:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:17.604 11:59:30 -- common/autotest_common.sh@10 -- # set +x 00:05:17.604 11:59:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:17.604 11:59:30 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:17.604 { 00:05:17.604 "name": "Malloc0", 00:05:17.604 "aliases": [ 00:05:17.604 "32c42cb8-d905-45b1-9fd9-1b5d4113e5df" 00:05:17.604 ], 00:05:17.604 "product_name": "Malloc disk", 00:05:17.604 "block_size": 512, 00:05:17.604 "num_blocks": 16384, 00:05:17.604 "uuid": "32c42cb8-d905-45b1-9fd9-1b5d4113e5df", 00:05:17.604 "assigned_rate_limits": { 00:05:17.604 "rw_ios_per_sec": 0, 00:05:17.604 "rw_mbytes_per_sec": 0, 00:05:17.604 "r_mbytes_per_sec": 0, 00:05:17.604 "w_mbytes_per_sec": 0 00:05:17.604 }, 00:05:17.604 "claimed": false, 00:05:17.604 "zoned": false, 00:05:17.604 "supported_io_types": { 00:05:17.604 "read": true, 00:05:17.604 "write": true, 00:05:17.604 "unmap": true, 00:05:17.604 "write_zeroes": true, 00:05:17.604 "flush": true, 00:05:17.604 "reset": true, 00:05:17.604 "compare": false, 00:05:17.604 "compare_and_write": false, 00:05:17.604 "abort": true, 00:05:17.604 "nvme_admin": false, 00:05:17.604 "nvme_io": false 00:05:17.604 }, 00:05:17.604 "memory_domains": [ 00:05:17.604 { 00:05:17.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.604 "dma_device_type": 2 00:05:17.604 } 00:05:17.604 ], 00:05:17.604 "driver_specific": {} 00:05:17.604 } 00:05:17.604 ]' 00:05:17.604 11:59:30 -- rpc/rpc.sh@17 -- # jq length 00:05:17.604 11:59:30 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:17.604 11:59:30 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:17.604 11:59:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:17.604 11:59:30 -- common/autotest_common.sh@10 -- # set +x 00:05:17.604 [2024-06-11 11:59:30.502008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:17.604 [2024-06-11 11:59:30.502048] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:17.604 [2024-06-11 11:59:30.502060] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ff6a70 00:05:17.604 [2024-06-11 11:59:30.502067] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:17.604 [2024-06-11 11:59:30.503361] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:17.604 [2024-06-11 11:59:30.503382] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:17.604 Passthru0 00:05:17.604 11:59:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:17.604 11:59:30 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:17.604 11:59:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:17.604 11:59:30 -- common/autotest_common.sh@10 -- # set +x 00:05:17.604 11:59:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:17.604 11:59:30 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:17.604 { 00:05:17.604 "name": "Malloc0", 00:05:17.604 "aliases": [ 00:05:17.604 "32c42cb8-d905-45b1-9fd9-1b5d4113e5df" 00:05:17.604 ], 00:05:17.604 "product_name": "Malloc disk", 00:05:17.604 "block_size": 512, 00:05:17.604 "num_blocks": 16384, 00:05:17.604 "uuid": "32c42cb8-d905-45b1-9fd9-1b5d4113e5df", 00:05:17.604 "assigned_rate_limits": { 00:05:17.604 "rw_ios_per_sec": 0, 00:05:17.604 "rw_mbytes_per_sec": 0, 00:05:17.604 
"r_mbytes_per_sec": 0, 00:05:17.604 "w_mbytes_per_sec": 0 00:05:17.604 }, 00:05:17.604 "claimed": true, 00:05:17.604 "claim_type": "exclusive_write", 00:05:17.604 "zoned": false, 00:05:17.604 "supported_io_types": { 00:05:17.604 "read": true, 00:05:17.604 "write": true, 00:05:17.604 "unmap": true, 00:05:17.604 "write_zeroes": true, 00:05:17.604 "flush": true, 00:05:17.604 "reset": true, 00:05:17.604 "compare": false, 00:05:17.604 "compare_and_write": false, 00:05:17.604 "abort": true, 00:05:17.604 "nvme_admin": false, 00:05:17.604 "nvme_io": false 00:05:17.604 }, 00:05:17.604 "memory_domains": [ 00:05:17.604 { 00:05:17.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.604 "dma_device_type": 2 00:05:17.604 } 00:05:17.604 ], 00:05:17.604 "driver_specific": {} 00:05:17.604 }, 00:05:17.604 { 00:05:17.604 "name": "Passthru0", 00:05:17.604 "aliases": [ 00:05:17.604 "5cac2ef6-8199-5f02-a3e6-8cba131eeee5" 00:05:17.604 ], 00:05:17.604 "product_name": "passthru", 00:05:17.604 "block_size": 512, 00:05:17.604 "num_blocks": 16384, 00:05:17.604 "uuid": "5cac2ef6-8199-5f02-a3e6-8cba131eeee5", 00:05:17.604 "assigned_rate_limits": { 00:05:17.604 "rw_ios_per_sec": 0, 00:05:17.604 "rw_mbytes_per_sec": 0, 00:05:17.604 "r_mbytes_per_sec": 0, 00:05:17.604 "w_mbytes_per_sec": 0 00:05:17.604 }, 00:05:17.604 "claimed": false, 00:05:17.604 "zoned": false, 00:05:17.604 "supported_io_types": { 00:05:17.604 "read": true, 00:05:17.604 "write": true, 00:05:17.604 "unmap": true, 00:05:17.604 "write_zeroes": true, 00:05:17.604 "flush": true, 00:05:17.604 "reset": true, 00:05:17.604 "compare": false, 00:05:17.604 "compare_and_write": false, 00:05:17.604 "abort": true, 00:05:17.604 "nvme_admin": false, 00:05:17.604 "nvme_io": false 00:05:17.604 }, 00:05:17.604 "memory_domains": [ 00:05:17.604 { 00:05:17.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.604 "dma_device_type": 2 00:05:17.604 } 00:05:17.604 ], 00:05:17.604 "driver_specific": { 00:05:17.604 "passthru": { 00:05:17.604 "name": "Passthru0", 00:05:17.604 "base_bdev_name": "Malloc0" 00:05:17.604 } 00:05:17.604 } 00:05:17.604 } 00:05:17.604 ]' 00:05:17.604 11:59:30 -- rpc/rpc.sh@21 -- # jq length 00:05:17.604 11:59:30 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:17.604 11:59:30 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:17.604 11:59:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:17.604 11:59:30 -- common/autotest_common.sh@10 -- # set +x 00:05:17.604 11:59:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:17.604 11:59:30 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:17.604 11:59:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:17.604 11:59:30 -- common/autotest_common.sh@10 -- # set +x 00:05:17.604 11:59:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:17.604 11:59:30 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:17.604 11:59:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:17.604 11:59:30 -- common/autotest_common.sh@10 -- # set +x 00:05:17.604 11:59:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:17.604 11:59:30 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:17.604 11:59:30 -- rpc/rpc.sh@26 -- # jq length 00:05:17.865 11:59:30 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:17.865 00:05:17.865 real 0m0.288s 00:05:17.865 user 0m0.182s 00:05:17.865 sys 0m0.036s 00:05:17.865 11:59:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.865 11:59:30 -- common/autotest_common.sh@10 -- # set +x 00:05:17.865 ************************************ 
00:05:17.865 END TEST rpc_integrity 00:05:17.865 ************************************ 00:05:17.865 11:59:30 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:17.865 11:59:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:17.865 11:59:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.865 11:59:30 -- common/autotest_common.sh@10 -- # set +x 00:05:17.865 ************************************ 00:05:17.866 START TEST rpc_plugins 00:05:17.866 ************************************ 00:05:17.866 11:59:30 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:17.866 11:59:30 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:17.866 11:59:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:17.866 11:59:30 -- common/autotest_common.sh@10 -- # set +x 00:05:17.866 11:59:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:17.866 11:59:30 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:17.866 11:59:30 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:17.866 11:59:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:17.866 11:59:30 -- common/autotest_common.sh@10 -- # set +x 00:05:17.866 11:59:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:17.866 11:59:30 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:17.866 { 00:05:17.866 "name": "Malloc1", 00:05:17.866 "aliases": [ 00:05:17.866 "3c9fa71d-b781-4c59-9d32-4aefb6c6a6b7" 00:05:17.866 ], 00:05:17.866 "product_name": "Malloc disk", 00:05:17.866 "block_size": 4096, 00:05:17.866 "num_blocks": 256, 00:05:17.866 "uuid": "3c9fa71d-b781-4c59-9d32-4aefb6c6a6b7", 00:05:17.866 "assigned_rate_limits": { 00:05:17.866 "rw_ios_per_sec": 0, 00:05:17.866 "rw_mbytes_per_sec": 0, 00:05:17.866 "r_mbytes_per_sec": 0, 00:05:17.866 "w_mbytes_per_sec": 0 00:05:17.866 }, 00:05:17.866 "claimed": false, 00:05:17.866 "zoned": false, 00:05:17.866 "supported_io_types": { 00:05:17.866 "read": true, 00:05:17.866 "write": true, 00:05:17.866 "unmap": true, 00:05:17.866 "write_zeroes": true, 00:05:17.866 "flush": true, 00:05:17.866 "reset": true, 00:05:17.866 "compare": false, 00:05:17.866 "compare_and_write": false, 00:05:17.866 "abort": true, 00:05:17.866 "nvme_admin": false, 00:05:17.866 "nvme_io": false 00:05:17.866 }, 00:05:17.866 "memory_domains": [ 00:05:17.866 { 00:05:17.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.866 "dma_device_type": 2 00:05:17.866 } 00:05:17.866 ], 00:05:17.866 "driver_specific": {} 00:05:17.866 } 00:05:17.866 ]' 00:05:17.866 11:59:30 -- rpc/rpc.sh@32 -- # jq length 00:05:17.866 11:59:30 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:17.866 11:59:30 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:17.866 11:59:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:17.866 11:59:30 -- common/autotest_common.sh@10 -- # set +x 00:05:17.866 11:59:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:17.866 11:59:30 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:17.866 11:59:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:17.866 11:59:30 -- common/autotest_common.sh@10 -- # set +x 00:05:17.866 11:59:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:17.866 11:59:30 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:17.866 11:59:30 -- rpc/rpc.sh@36 -- # jq length 00:05:17.866 11:59:30 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:17.866 00:05:17.866 real 0m0.144s 00:05:17.866 user 0m0.091s 00:05:17.866 sys 0m0.020s 00:05:17.866 11:59:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.866 11:59:30 -- 
common/autotest_common.sh@10 -- # set +x 00:05:17.866 ************************************ 00:05:17.866 END TEST rpc_plugins 00:05:17.866 ************************************ 00:05:17.866 11:59:30 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:17.866 11:59:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:17.866 11:59:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.866 11:59:30 -- common/autotest_common.sh@10 -- # set +x 00:05:17.866 ************************************ 00:05:17.866 START TEST rpc_trace_cmd_test 00:05:17.866 ************************************ 00:05:17.866 11:59:30 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:17.866 11:59:30 -- rpc/rpc.sh@40 -- # local info 00:05:17.866 11:59:30 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:17.866 11:59:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:17.866 11:59:30 -- common/autotest_common.sh@10 -- # set +x 00:05:17.866 11:59:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:17.866 11:59:30 -- rpc/rpc.sh@42 -- # info='{ 00:05:17.866 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1260951", 00:05:17.866 "tpoint_group_mask": "0x8", 00:05:17.866 "iscsi_conn": { 00:05:17.866 "mask": "0x2", 00:05:17.866 "tpoint_mask": "0x0" 00:05:17.866 }, 00:05:17.866 "scsi": { 00:05:17.866 "mask": "0x4", 00:05:17.866 "tpoint_mask": "0x0" 00:05:17.866 }, 00:05:17.866 "bdev": { 00:05:17.866 "mask": "0x8", 00:05:17.866 "tpoint_mask": "0xffffffffffffffff" 00:05:17.866 }, 00:05:17.866 "nvmf_rdma": { 00:05:17.866 "mask": "0x10", 00:05:17.866 "tpoint_mask": "0x0" 00:05:17.866 }, 00:05:17.866 "nvmf_tcp": { 00:05:17.866 "mask": "0x20", 00:05:17.866 "tpoint_mask": "0x0" 00:05:17.866 }, 00:05:17.866 "ftl": { 00:05:17.866 "mask": "0x40", 00:05:17.866 "tpoint_mask": "0x0" 00:05:17.866 }, 00:05:17.866 "blobfs": { 00:05:17.866 "mask": "0x80", 00:05:17.866 "tpoint_mask": "0x0" 00:05:17.866 }, 00:05:17.866 "dsa": { 00:05:17.866 "mask": "0x200", 00:05:17.866 "tpoint_mask": "0x0" 00:05:17.866 }, 00:05:17.866 "thread": { 00:05:17.866 "mask": "0x400", 00:05:17.866 "tpoint_mask": "0x0" 00:05:17.866 }, 00:05:17.866 "nvme_pcie": { 00:05:17.866 "mask": "0x800", 00:05:17.866 "tpoint_mask": "0x0" 00:05:17.866 }, 00:05:17.866 "iaa": { 00:05:17.866 "mask": "0x1000", 00:05:17.866 "tpoint_mask": "0x0" 00:05:17.866 }, 00:05:17.866 "nvme_tcp": { 00:05:17.866 "mask": "0x2000", 00:05:17.866 "tpoint_mask": "0x0" 00:05:17.866 }, 00:05:17.866 "bdev_nvme": { 00:05:17.866 "mask": "0x4000", 00:05:17.866 "tpoint_mask": "0x0" 00:05:17.866 } 00:05:17.866 }' 00:05:18.127 11:59:30 -- rpc/rpc.sh@43 -- # jq length 00:05:18.127 11:59:30 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:18.127 11:59:30 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:18.127 11:59:30 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:18.127 11:59:30 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:18.127 11:59:31 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:18.127 11:59:31 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:18.127 11:59:31 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:18.127 11:59:31 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:18.127 11:59:31 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:18.127 00:05:18.127 real 0m0.244s 00:05:18.127 user 0m0.209s 00:05:18.127 sys 0m0.025s 00:05:18.127 11:59:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.127 11:59:31 -- common/autotest_common.sh@10 -- # set +x 00:05:18.127 ************************************ 
00:05:18.127 END TEST rpc_trace_cmd_test 00:05:18.127 ************************************ 00:05:18.127 11:59:31 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:18.127 11:59:31 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:18.127 11:59:31 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:18.127 11:59:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.127 11:59:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.388 11:59:31 -- common/autotest_common.sh@10 -- # set +x 00:05:18.388 ************************************ 00:05:18.388 START TEST rpc_daemon_integrity 00:05:18.388 ************************************ 00:05:18.388 11:59:31 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:18.388 11:59:31 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:18.388 11:59:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:18.388 11:59:31 -- common/autotest_common.sh@10 -- # set +x 00:05:18.388 11:59:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:18.388 11:59:31 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:18.388 11:59:31 -- rpc/rpc.sh@13 -- # jq length 00:05:18.388 11:59:31 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:18.388 11:59:31 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:18.388 11:59:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:18.388 11:59:31 -- common/autotest_common.sh@10 -- # set +x 00:05:18.388 11:59:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:18.388 11:59:31 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:18.388 11:59:31 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:18.388 11:59:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:18.388 11:59:31 -- common/autotest_common.sh@10 -- # set +x 00:05:18.388 11:59:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:18.388 11:59:31 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:18.388 { 00:05:18.388 "name": "Malloc2", 00:05:18.388 "aliases": [ 00:05:18.388 "27213571-ae17-4c47-8a44-3add17d54165" 00:05:18.388 ], 00:05:18.388 "product_name": "Malloc disk", 00:05:18.388 "block_size": 512, 00:05:18.388 "num_blocks": 16384, 00:05:18.388 "uuid": "27213571-ae17-4c47-8a44-3add17d54165", 00:05:18.388 "assigned_rate_limits": { 00:05:18.388 "rw_ios_per_sec": 0, 00:05:18.388 "rw_mbytes_per_sec": 0, 00:05:18.388 "r_mbytes_per_sec": 0, 00:05:18.388 "w_mbytes_per_sec": 0 00:05:18.388 }, 00:05:18.388 "claimed": false, 00:05:18.388 "zoned": false, 00:05:18.388 "supported_io_types": { 00:05:18.388 "read": true, 00:05:18.388 "write": true, 00:05:18.388 "unmap": true, 00:05:18.388 "write_zeroes": true, 00:05:18.388 "flush": true, 00:05:18.388 "reset": true, 00:05:18.388 "compare": false, 00:05:18.388 "compare_and_write": false, 00:05:18.388 "abort": true, 00:05:18.388 "nvme_admin": false, 00:05:18.388 "nvme_io": false 00:05:18.388 }, 00:05:18.388 "memory_domains": [ 00:05:18.388 { 00:05:18.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.388 "dma_device_type": 2 00:05:18.388 } 00:05:18.388 ], 00:05:18.388 "driver_specific": {} 00:05:18.388 } 00:05:18.388 ]' 00:05:18.388 11:59:31 -- rpc/rpc.sh@17 -- # jq length 00:05:18.388 11:59:31 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:18.388 11:59:31 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:18.388 11:59:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:18.388 11:59:31 -- common/autotest_common.sh@10 -- # set +x 00:05:18.388 [2024-06-11 11:59:31.308193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:18.388 [2024-06-11 
11:59:31.308227] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:18.388 [2024-06-11 11:59:31.308240] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ff79e0 00:05:18.388 [2024-06-11 11:59:31.308246] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:18.388 [2024-06-11 11:59:31.309439] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:18.388 [2024-06-11 11:59:31.309459] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:18.388 Passthru0 00:05:18.388 11:59:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:18.388 11:59:31 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:18.388 11:59:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:18.388 11:59:31 -- common/autotest_common.sh@10 -- # set +x 00:05:18.388 11:59:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:18.388 11:59:31 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:18.388 { 00:05:18.388 "name": "Malloc2", 00:05:18.388 "aliases": [ 00:05:18.388 "27213571-ae17-4c47-8a44-3add17d54165" 00:05:18.388 ], 00:05:18.388 "product_name": "Malloc disk", 00:05:18.388 "block_size": 512, 00:05:18.388 "num_blocks": 16384, 00:05:18.388 "uuid": "27213571-ae17-4c47-8a44-3add17d54165", 00:05:18.388 "assigned_rate_limits": { 00:05:18.388 "rw_ios_per_sec": 0, 00:05:18.388 "rw_mbytes_per_sec": 0, 00:05:18.388 "r_mbytes_per_sec": 0, 00:05:18.388 "w_mbytes_per_sec": 0 00:05:18.388 }, 00:05:18.388 "claimed": true, 00:05:18.388 "claim_type": "exclusive_write", 00:05:18.388 "zoned": false, 00:05:18.388 "supported_io_types": { 00:05:18.388 "read": true, 00:05:18.388 "write": true, 00:05:18.388 "unmap": true, 00:05:18.388 "write_zeroes": true, 00:05:18.388 "flush": true, 00:05:18.388 "reset": true, 00:05:18.388 "compare": false, 00:05:18.389 "compare_and_write": false, 00:05:18.389 "abort": true, 00:05:18.389 "nvme_admin": false, 00:05:18.389 "nvme_io": false 00:05:18.389 }, 00:05:18.389 "memory_domains": [ 00:05:18.389 { 00:05:18.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.389 "dma_device_type": 2 00:05:18.389 } 00:05:18.389 ], 00:05:18.389 "driver_specific": {} 00:05:18.389 }, 00:05:18.389 { 00:05:18.389 "name": "Passthru0", 00:05:18.389 "aliases": [ 00:05:18.389 "83555239-025e-5bbe-82c3-c49e91a2433f" 00:05:18.389 ], 00:05:18.389 "product_name": "passthru", 00:05:18.389 "block_size": 512, 00:05:18.389 "num_blocks": 16384, 00:05:18.389 "uuid": "83555239-025e-5bbe-82c3-c49e91a2433f", 00:05:18.389 "assigned_rate_limits": { 00:05:18.389 "rw_ios_per_sec": 0, 00:05:18.389 "rw_mbytes_per_sec": 0, 00:05:18.389 "r_mbytes_per_sec": 0, 00:05:18.389 "w_mbytes_per_sec": 0 00:05:18.389 }, 00:05:18.389 "claimed": false, 00:05:18.389 "zoned": false, 00:05:18.389 "supported_io_types": { 00:05:18.389 "read": true, 00:05:18.389 "write": true, 00:05:18.389 "unmap": true, 00:05:18.389 "write_zeroes": true, 00:05:18.389 "flush": true, 00:05:18.389 "reset": true, 00:05:18.389 "compare": false, 00:05:18.389 "compare_and_write": false, 00:05:18.389 "abort": true, 00:05:18.389 "nvme_admin": false, 00:05:18.389 "nvme_io": false 00:05:18.389 }, 00:05:18.389 "memory_domains": [ 00:05:18.389 { 00:05:18.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.389 "dma_device_type": 2 00:05:18.389 } 00:05:18.389 ], 00:05:18.389 "driver_specific": { 00:05:18.389 "passthru": { 00:05:18.389 "name": "Passthru0", 00:05:18.389 "base_bdev_name": "Malloc2" 00:05:18.389 } 00:05:18.389 } 00:05:18.389 } 
00:05:18.389 ]' 00:05:18.389 11:59:31 -- rpc/rpc.sh@21 -- # jq length 00:05:18.389 11:59:31 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:18.389 11:59:31 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:18.389 11:59:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:18.389 11:59:31 -- common/autotest_common.sh@10 -- # set +x 00:05:18.389 11:59:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:18.389 11:59:31 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:18.389 11:59:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:18.389 11:59:31 -- common/autotest_common.sh@10 -- # set +x 00:05:18.389 11:59:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:18.389 11:59:31 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:18.389 11:59:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:18.389 11:59:31 -- common/autotest_common.sh@10 -- # set +x 00:05:18.389 11:59:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:18.389 11:59:31 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:18.389 11:59:31 -- rpc/rpc.sh@26 -- # jq length 00:05:18.650 11:59:31 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:18.650 00:05:18.650 real 0m0.293s 00:05:18.650 user 0m0.193s 00:05:18.650 sys 0m0.031s 00:05:18.650 11:59:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.650 11:59:31 -- common/autotest_common.sh@10 -- # set +x 00:05:18.650 ************************************ 00:05:18.650 END TEST rpc_daemon_integrity 00:05:18.650 ************************************ 00:05:18.650 11:59:31 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:18.650 11:59:31 -- rpc/rpc.sh@84 -- # killprocess 1260951 00:05:18.650 11:59:31 -- common/autotest_common.sh@926 -- # '[' -z 1260951 ']' 00:05:18.650 11:59:31 -- common/autotest_common.sh@930 -- # kill -0 1260951 00:05:18.650 11:59:31 -- common/autotest_common.sh@931 -- # uname 00:05:18.650 11:59:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:18.650 11:59:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1260951 00:05:18.650 11:59:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:18.650 11:59:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:18.650 11:59:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1260951' 00:05:18.650 killing process with pid 1260951 00:05:18.650 11:59:31 -- common/autotest_common.sh@945 -- # kill 1260951 00:05:18.650 11:59:31 -- common/autotest_common.sh@950 -- # wait 1260951 00:05:18.911 00:05:18.911 real 0m2.312s 00:05:18.911 user 0m3.041s 00:05:18.911 sys 0m0.604s 00:05:18.911 11:59:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.911 11:59:31 -- common/autotest_common.sh@10 -- # set +x 00:05:18.911 ************************************ 00:05:18.911 END TEST rpc 00:05:18.911 ************************************ 00:05:18.911 11:59:31 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:18.911 11:59:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.911 11:59:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.911 11:59:31 -- common/autotest_common.sh@10 -- # set +x 00:05:18.911 ************************************ 00:05:18.911 START TEST rpc_client 00:05:18.911 ************************************ 00:05:18.911 11:59:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 
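
The rpc_integrity and rpc_daemon_integrity runs above both exercise the same pattern: create a malloc bdev over the RPC socket, stack a passthru bdev on it (which takes an exclusive_write claim on the base, as the bdev_get_bdevs dump shows), count the reported bdevs with jq, then delete everything and confirm the list is empty again. A minimal standalone sketch of that sequence, assuming a running spdk_tgt; the shell variable names below are purely illustrative, while the RPC calls and bdev names are the ones visible in the log:

    rpc=./scripts/rpc.py    # add -s <socket> if the target is not on the default RPC socket

    base=$($rpc bdev_malloc_create 8 512)                # 8 MiB malloc bdev, 512-byte blocks
    $rpc bdev_passthru_create -b "$base" -p Passthru0    # passthru claims the base bdev

    test "$($rpc bdev_get_bdevs | jq length)" -eq 2      # base + passthru are both reported

    $rpc bdev_passthru_delete Passthru0                  # tear down passthru first...
    $rpc bdev_malloc_delete "$base"                      # ...then the now-unclaimed base
    test "$($rpc bdev_get_bdevs | jq length)" -eq 0      # nothing left behind

The test tears down in that order so the base bdev is unclaimed before it is deleted, which is why the final jq length check can expect an empty bdev list.
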
00:05:18.911 * Looking for test storage... 00:05:18.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:18.911 11:59:31 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:18.911 OK 00:05:18.911 11:59:31 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:18.911 00:05:18.911 real 0m0.121s 00:05:18.911 user 0m0.054s 00:05:18.911 sys 0m0.075s 00:05:18.911 11:59:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.911 11:59:31 -- common/autotest_common.sh@10 -- # set +x 00:05:18.911 ************************************ 00:05:18.911 END TEST rpc_client 00:05:18.911 ************************************ 00:05:18.911 11:59:31 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:18.911 11:59:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.911 11:59:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.911 11:59:31 -- common/autotest_common.sh@10 -- # set +x 00:05:19.172 ************************************ 00:05:19.172 START TEST json_config 00:05:19.172 ************************************ 00:05:19.172 11:59:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:19.172 11:59:32 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:19.172 11:59:32 -- nvmf/common.sh@7 -- # uname -s 00:05:19.172 11:59:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.172 11:59:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.172 11:59:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.172 11:59:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.172 11:59:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.172 11:59:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.172 11:59:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.172 11:59:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.172 11:59:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.172 11:59:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.172 11:59:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:19.172 11:59:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:19.172 11:59:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.172 11:59:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.172 11:59:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:19.172 11:59:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:19.172 11:59:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.172 11:59:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.172 11:59:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.172 11:59:32 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.173 11:59:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.173 11:59:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.173 11:59:32 -- paths/export.sh@5 -- # export PATH 00:05:19.173 11:59:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.173 11:59:32 -- nvmf/common.sh@46 -- # : 0 00:05:19.173 11:59:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:19.173 11:59:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:19.173 11:59:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:19.173 11:59:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.173 11:59:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.173 11:59:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:19.173 11:59:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:19.173 11:59:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:19.173 11:59:32 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:19.173 11:59:32 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:19.173 11:59:32 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:19.173 11:59:32 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:19.173 11:59:32 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:19.173 11:59:32 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:19.173 11:59:32 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:19.173 11:59:32 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:19.173 11:59:32 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:19.173 11:59:32 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:19.173 11:59:32 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:19.173 11:59:32 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:19.173 11:59:32 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:19.173 11:59:32 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:19.173 11:59:32 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:19.173 INFO: JSON configuration test init 00:05:19.173 11:59:32 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:19.173 11:59:32 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:19.173 11:59:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:19.173 11:59:32 -- common/autotest_common.sh@10 -- # set +x 00:05:19.173 11:59:32 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:19.173 11:59:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:19.173 11:59:32 -- common/autotest_common.sh@10 -- # set +x 00:05:19.173 11:59:32 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:19.173 11:59:32 -- json_config/json_config.sh@98 -- # local app=target 00:05:19.173 11:59:32 -- json_config/json_config.sh@99 -- # shift 00:05:19.173 11:59:32 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:19.173 11:59:32 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:19.173 11:59:32 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:19.173 11:59:32 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:19.173 11:59:32 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:19.173 11:59:32 -- json_config/json_config.sh@111 -- # app_pid[$app]=1261819 00:05:19.173 11:59:32 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:19.173 Waiting for target to run... 00:05:19.173 11:59:32 -- json_config/json_config.sh@114 -- # waitforlisten 1261819 /var/tmp/spdk_tgt.sock 00:05:19.173 11:59:32 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:19.173 11:59:32 -- common/autotest_common.sh@819 -- # '[' -z 1261819 ']' 00:05:19.173 11:59:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:19.173 11:59:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:19.173 11:59:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:19.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:19.173 11:59:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:19.173 11:59:32 -- common/autotest_common.sh@10 -- # set +x 00:05:19.173 [2024-06-11 11:59:32.123703] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:19.173 [2024-06-11 11:59:32.123773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1261819 ] 00:05:19.173 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.433 [2024-06-11 11:59:32.421098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.433 [2024-06-11 11:59:32.437947] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:19.433 [2024-06-11 11:59:32.438086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.003 11:59:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:20.003 11:59:32 -- common/autotest_common.sh@852 -- # return 0 00:05:20.003 11:59:32 -- json_config/json_config.sh@115 -- # echo '' 00:05:20.003 00:05:20.003 11:59:32 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:20.003 11:59:32 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:20.003 11:59:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:20.003 11:59:32 -- common/autotest_common.sh@10 -- # set +x 00:05:20.003 11:59:32 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:20.003 11:59:32 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:20.003 11:59:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:20.003 11:59:32 -- common/autotest_common.sh@10 -- # set +x 00:05:20.003 11:59:32 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:20.003 11:59:32 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:20.003 11:59:32 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:20.574 11:59:33 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:20.574 11:59:33 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:20.574 11:59:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:20.574 11:59:33 -- common/autotest_common.sh@10 -- # set +x 00:05:20.574 11:59:33 -- json_config/json_config.sh@48 -- # local ret=0 00:05:20.574 11:59:33 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:20.574 11:59:33 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:20.574 11:59:33 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:20.574 11:59:33 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:20.574 11:59:33 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:20.834 11:59:33 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:20.834 11:59:33 -- json_config/json_config.sh@51 -- # local get_types 00:05:20.834 11:59:33 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:20.834 11:59:33 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:20.834 11:59:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:20.834 11:59:33 -- common/autotest_common.sh@10 -- # set +x 00:05:20.834 11:59:33 -- json_config/json_config.sh@58 -- # return 0 00:05:20.834 11:59:33 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:20.834 11:59:33 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:20.835 11:59:33 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:20.835 11:59:33 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:20.835 11:59:33 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:20.835 11:59:33 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:20.835 11:59:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:20.835 11:59:33 -- common/autotest_common.sh@10 -- # set +x 00:05:20.835 11:59:33 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:20.835 11:59:33 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:20.835 11:59:33 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:20.835 11:59:33 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:20.835 11:59:33 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:20.835 MallocForNvmf0 00:05:20.835 11:59:33 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:20.835 11:59:33 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:21.094 MallocForNvmf1 00:05:21.095 11:59:33 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:21.095 11:59:33 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:21.095 [2024-06-11 11:59:34.102601] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:21.355 11:59:34 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:21.355 11:59:34 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:21.355 11:59:34 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:21.355 11:59:34 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:21.615 11:59:34 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:21.615 11:59:34 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:21.615 11:59:34 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:21.615 11:59:34 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:21.876 [2024-06-11 11:59:34.744725] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
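
The json_config run above builds its NVMe over TCP target state one RPC at a time: two malloc bdevs are created, a TCP transport is registered, subsystem nqn.2016-06.io.spdk:cnode1 is created, both bdevs are attached as namespaces, and a listener is opened on 127.0.0.1:4420. Reproducing that state by hand against a running spdk_tgt looks roughly like the following; the calls and arguments are the ones shown in the log, with only the short ./scripts/rpc.py path substituted for the full workspace path:

    rpc="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    # Backing bdevs for the namespaces (size in MiB, block size in bytes).
    $rpc bdev_malloc_create 8 512  --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1

    # TCP transport, subsystem, namespaces, and a listener on 127.0.0.1:4420.
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

Once the listener is up the target prints the "NVMe/TCP Target Listening" notice seen above, and save_config serializes exactly this state for the comparison steps that follow.
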
00:05:21.876 11:59:34 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:21.876 11:59:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:21.876 11:59:34 -- common/autotest_common.sh@10 -- # set +x 00:05:21.876 11:59:34 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:21.876 11:59:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:21.876 11:59:34 -- common/autotest_common.sh@10 -- # set +x 00:05:21.876 11:59:34 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:21.876 11:59:34 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:21.876 11:59:34 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:22.136 MallocBdevForConfigChangeCheck 00:05:22.136 11:59:35 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:22.136 11:59:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:22.136 11:59:35 -- common/autotest_common.sh@10 -- # set +x 00:05:22.136 11:59:35 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:22.136 11:59:35 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:22.397 11:59:35 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:22.397 INFO: shutting down applications... 00:05:22.397 11:59:35 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:22.397 11:59:35 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:22.397 11:59:35 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:22.397 11:59:35 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:22.968 Calling clear_iscsi_subsystem 00:05:22.968 Calling clear_nvmf_subsystem 00:05:22.968 Calling clear_nbd_subsystem 00:05:22.968 Calling clear_ublk_subsystem 00:05:22.968 Calling clear_vhost_blk_subsystem 00:05:22.968 Calling clear_vhost_scsi_subsystem 00:05:22.968 Calling clear_scheduler_subsystem 00:05:22.968 Calling clear_bdev_subsystem 00:05:22.968 Calling clear_accel_subsystem 00:05:22.968 Calling clear_vmd_subsystem 00:05:22.968 Calling clear_sock_subsystem 00:05:22.968 Calling clear_iobuf_subsystem 00:05:22.968 11:59:35 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:22.968 11:59:35 -- json_config/json_config.sh@396 -- # count=100 00:05:22.968 11:59:35 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:22.968 11:59:35 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:22.968 11:59:35 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:22.968 11:59:35 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:23.228 11:59:36 -- json_config/json_config.sh@398 -- # break 00:05:23.228 11:59:36 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:23.228 11:59:36 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:05:23.228 11:59:36 -- json_config/json_config.sh@120 -- # local app=target 00:05:23.228 11:59:36 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:23.228 11:59:36 -- json_config/json_config.sh@124 -- # [[ -n 1261819 ]] 00:05:23.228 11:59:36 -- json_config/json_config.sh@127 -- # kill -SIGINT 1261819 00:05:23.228 11:59:36 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:23.228 11:59:36 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:23.228 11:59:36 -- json_config/json_config.sh@130 -- # kill -0 1261819 00:05:23.228 11:59:36 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:23.800 11:59:36 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:23.800 11:59:36 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:23.800 11:59:36 -- json_config/json_config.sh@130 -- # kill -0 1261819 00:05:23.800 11:59:36 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:23.800 11:59:36 -- json_config/json_config.sh@132 -- # break 00:05:23.800 11:59:36 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:23.800 11:59:36 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:23.800 SPDK target shutdown done 00:05:23.800 11:59:36 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:23.800 INFO: relaunching applications... 00:05:23.800 11:59:36 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.800 11:59:36 -- json_config/json_config.sh@98 -- # local app=target 00:05:23.800 11:59:36 -- json_config/json_config.sh@99 -- # shift 00:05:23.800 11:59:36 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:23.800 11:59:36 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:23.800 11:59:36 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:23.800 11:59:36 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:23.800 11:59:36 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:23.800 11:59:36 -- json_config/json_config.sh@111 -- # app_pid[$app]=1262704 00:05:23.800 11:59:36 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:23.800 Waiting for target to run... 00:05:23.800 11:59:36 -- json_config/json_config.sh@114 -- # waitforlisten 1262704 /var/tmp/spdk_tgt.sock 00:05:23.800 11:59:36 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.800 11:59:36 -- common/autotest_common.sh@819 -- # '[' -z 1262704 ']' 00:05:23.800 11:59:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:23.800 11:59:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:23.800 11:59:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:23.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:23.800 11:59:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:23.800 11:59:36 -- common/autotest_common.sh@10 -- # set +x 00:05:23.800 [2024-06-11 11:59:36.610627] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:23.800 [2024-06-11 11:59:36.610701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1262704 ] 00:05:23.800 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.061 [2024-06-11 11:59:37.019208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.061 [2024-06-11 11:59:37.036527] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:24.061 [2024-06-11 11:59:37.036659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.631 [2024-06-11 11:59:37.503889] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:24.631 [2024-06-11 11:59:37.536265] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:25.202 11:59:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:25.202 11:59:38 -- common/autotest_common.sh@852 -- # return 0 00:05:25.202 11:59:38 -- json_config/json_config.sh@115 -- # echo '' 00:05:25.202 00:05:25.202 11:59:38 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:25.202 11:59:38 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:25.202 INFO: Checking if target configuration is the same... 00:05:25.202 11:59:38 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:25.202 11:59:38 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:25.202 11:59:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:25.202 + '[' 2 -ne 2 ']' 00:05:25.202 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:25.202 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:25.202 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:25.202 +++ basename /dev/fd/62 00:05:25.202 ++ mktemp /tmp/62.XXX 00:05:25.202 + tmp_file_1=/tmp/62.JL4 00:05:25.202 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:25.202 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:25.202 + tmp_file_2=/tmp/spdk_tgt_config.json.XTy 00:05:25.202 + ret=0 00:05:25.202 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:25.463 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:25.463 + diff -u /tmp/62.JL4 /tmp/spdk_tgt_config.json.XTy 00:05:25.463 + echo 'INFO: JSON config files are the same' 00:05:25.463 INFO: JSON config files are the same 00:05:25.463 + rm /tmp/62.JL4 /tmp/spdk_tgt_config.json.XTy 00:05:25.463 + exit 0 00:05:25.463 11:59:38 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:25.463 11:59:38 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:25.463 INFO: changing configuration and checking if this can be detected... 
00:05:25.463 11:59:38 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:25.463 11:59:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:25.723 11:59:38 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:25.723 11:59:38 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:25.723 11:59:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:25.723 + '[' 2 -ne 2 ']' 00:05:25.723 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:25.723 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:25.723 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:25.723 +++ basename /dev/fd/62 00:05:25.723 ++ mktemp /tmp/62.XXX 00:05:25.723 + tmp_file_1=/tmp/62.UvD 00:05:25.723 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:25.723 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:25.723 + tmp_file_2=/tmp/spdk_tgt_config.json.ABG 00:05:25.723 + ret=0 00:05:25.723 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:25.983 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:25.983 + diff -u /tmp/62.UvD /tmp/spdk_tgt_config.json.ABG 00:05:25.983 + ret=1 00:05:25.983 + echo '=== Start of file: /tmp/62.UvD ===' 00:05:25.983 + cat /tmp/62.UvD 00:05:25.983 + echo '=== End of file: /tmp/62.UvD ===' 00:05:25.983 + echo '' 00:05:25.983 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ABG ===' 00:05:25.983 + cat /tmp/spdk_tgt_config.json.ABG 00:05:25.983 + echo '=== End of file: /tmp/spdk_tgt_config.json.ABG ===' 00:05:25.983 + echo '' 00:05:25.984 + rm /tmp/62.UvD /tmp/spdk_tgt_config.json.ABG 00:05:25.984 + exit 1 00:05:25.984 11:59:38 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:25.984 INFO: configuration change detected. 
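
The "JSON config files are the same" and "configuration change detected" verdicts above come from a plain textual diff: json_diff.sh dumps the live configuration with save_config, runs both that dump and the saved spdk_tgt_config.json through config_filter.py -method sort so key ordering cannot cause false positives, and lets the exit status of diff -u decide. A condensed sketch of that flow, assuming config_filter.py filters stdin to stdout; the temp-file names here are illustrative rather than the mktemp names from the log:

    rpc="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    filter=./test/json_config/config_filter.py

    $rpc save_config | $filter -method sort              > /tmp/live.json
    $filter -method sort < spdk_tgt_config.json          > /tmp/saved.json

    if diff -u /tmp/saved.json /tmp/live.json; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi

Deleting MallocBdevForConfigChangeCheck just before the second comparison is what guarantees a non-empty diff, which is the ret=1 path reported above.
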
00:05:25.984 11:59:38 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:25.984 11:59:38 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:25.984 11:59:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:25.984 11:59:38 -- common/autotest_common.sh@10 -- # set +x 00:05:25.984 11:59:38 -- json_config/json_config.sh@360 -- # local ret=0 00:05:25.984 11:59:38 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:25.984 11:59:38 -- json_config/json_config.sh@370 -- # [[ -n 1262704 ]] 00:05:25.984 11:59:38 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:25.984 11:59:38 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:25.984 11:59:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:25.984 11:59:38 -- common/autotest_common.sh@10 -- # set +x 00:05:25.984 11:59:38 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:25.984 11:59:38 -- json_config/json_config.sh@246 -- # uname -s 00:05:25.984 11:59:38 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:25.984 11:59:38 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:25.984 11:59:38 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:25.984 11:59:38 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:25.984 11:59:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:25.984 11:59:38 -- common/autotest_common.sh@10 -- # set +x 00:05:25.984 11:59:38 -- json_config/json_config.sh@376 -- # killprocess 1262704 00:05:25.984 11:59:38 -- common/autotest_common.sh@926 -- # '[' -z 1262704 ']' 00:05:25.984 11:59:38 -- common/autotest_common.sh@930 -- # kill -0 1262704 00:05:25.984 11:59:38 -- common/autotest_common.sh@931 -- # uname 00:05:25.984 11:59:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:25.984 11:59:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1262704 00:05:25.984 11:59:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:25.984 11:59:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:25.984 11:59:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1262704' 00:05:25.984 killing process with pid 1262704 00:05:25.984 11:59:38 -- common/autotest_common.sh@945 -- # kill 1262704 00:05:25.984 11:59:38 -- common/autotest_common.sh@950 -- # wait 1262704 00:05:26.261 11:59:39 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:26.261 11:59:39 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:26.261 11:59:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:26.261 11:59:39 -- common/autotest_common.sh@10 -- # set +x 00:05:26.261 11:59:39 -- json_config/json_config.sh@381 -- # return 0 00:05:26.261 11:59:39 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:26.261 INFO: Success 00:05:26.261 00:05:26.261 real 0m7.325s 00:05:26.261 user 0m8.718s 00:05:26.261 sys 0m1.846s 00:05:26.261 11:59:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.261 11:59:39 -- common/autotest_common.sh@10 -- # set +x 00:05:26.261 ************************************ 00:05:26.261 END TEST json_config 00:05:26.261 ************************************ 00:05:26.572 11:59:39 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:26.572 11:59:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:26.572 11:59:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:26.572 11:59:39 -- common/autotest_common.sh@10 -- # set +x 00:05:26.572 ************************************ 00:05:26.572 START TEST json_config_extra_key 00:05:26.572 ************************************ 00:05:26.572 11:59:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:26.572 11:59:39 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:26.572 11:59:39 -- nvmf/common.sh@7 -- # uname -s 00:05:26.572 11:59:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.572 11:59:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.572 11:59:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.572 11:59:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.572 11:59:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:26.572 11:59:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.572 11:59:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.572 11:59:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.572 11:59:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.572 11:59:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.572 11:59:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:26.572 11:59:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:26.572 11:59:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.572 11:59:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.572 11:59:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:26.572 11:59:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:26.572 11:59:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.572 11:59:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.572 11:59:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.572 11:59:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.572 11:59:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.572 11:59:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.572 11:59:39 -- paths/export.sh@5 -- # export PATH 00:05:26.573 11:59:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.573 11:59:39 -- nvmf/common.sh@46 -- # : 0 00:05:26.573 11:59:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:26.573 11:59:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:26.573 11:59:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:26.573 11:59:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.573 11:59:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.573 11:59:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:26.573 11:59:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:26.573 11:59:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:26.573 11:59:39 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:26.573 11:59:39 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:26.573 11:59:39 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:26.573 11:59:39 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:26.573 11:59:39 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:26.573 11:59:39 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:26.573 11:59:39 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:26.573 11:59:39 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:26.573 11:59:39 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:26.573 11:59:39 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:26.573 INFO: launching applications... 00:05:26.573 11:59:39 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:26.573 11:59:39 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:26.573 11:59:39 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:26.573 11:59:39 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:26.573 11:59:39 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:26.573 11:59:39 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=1263427 00:05:26.573 11:59:39 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:26.573 Waiting for target to run... 
00:05:26.573 11:59:39 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 1263427 /var/tmp/spdk_tgt.sock 00:05:26.573 11:59:39 -- common/autotest_common.sh@819 -- # '[' -z 1263427 ']' 00:05:26.573 11:59:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:26.573 11:59:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:26.573 11:59:39 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:26.573 11:59:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:26.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:26.573 11:59:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:26.573 11:59:39 -- common/autotest_common.sh@10 -- # set +x 00:05:26.573 [2024-06-11 11:59:39.461909] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:26.573 [2024-06-11 11:59:39.461971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263427 ] 00:05:26.573 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.833 [2024-06-11 11:59:39.684610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.833 [2024-06-11 11:59:39.699286] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:26.833 [2024-06-11 11:59:39.699412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.403 11:59:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:27.403 11:59:40 -- common/autotest_common.sh@852 -- # return 0 00:05:27.403 11:59:40 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:27.403 00:05:27.403 11:59:40 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:27.403 INFO: shutting down applications... 
00:05:27.403 11:59:40 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:27.403 11:59:40 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:27.403 11:59:40 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:27.403 11:59:40 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 1263427 ]] 00:05:27.403 11:59:40 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 1263427 00:05:27.403 11:59:40 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:27.403 11:59:40 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:27.403 11:59:40 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1263427 00:05:27.403 11:59:40 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:27.973 11:59:40 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:27.973 11:59:40 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:27.973 11:59:40 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1263427 00:05:27.973 11:59:40 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:27.973 11:59:40 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:27.973 11:59:40 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:27.973 11:59:40 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:27.973 SPDK target shutdown done 00:05:27.973 11:59:40 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:27.973 Success 00:05:27.973 00:05:27.973 real 0m1.435s 00:05:27.973 user 0m1.146s 00:05:27.973 sys 0m0.302s 00:05:27.973 11:59:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.973 11:59:40 -- common/autotest_common.sh@10 -- # set +x 00:05:27.973 ************************************ 00:05:27.973 END TEST json_config_extra_key 00:05:27.973 ************************************ 00:05:27.973 11:59:40 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:27.973 11:59:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:27.973 11:59:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:27.973 11:59:40 -- common/autotest_common.sh@10 -- # set +x 00:05:27.973 ************************************ 00:05:27.973 START TEST alias_rpc 00:05:27.973 ************************************ 00:05:27.973 11:59:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:27.973 * Looking for test storage... 00:05:27.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:27.973 11:59:40 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:27.973 11:59:40 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1263818 00:05:27.973 11:59:40 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1263818 00:05:27.973 11:59:40 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.973 11:59:40 -- common/autotest_common.sh@819 -- # '[' -z 1263818 ']' 00:05:27.973 11:59:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.973 11:59:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:27.973 11:59:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:27.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.973 11:59:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:27.973 11:59:40 -- common/autotest_common.sh@10 -- # set +x 00:05:27.973 [2024-06-11 11:59:40.936027] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:27.973 [2024-06-11 11:59:40.936086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263818 ] 00:05:27.973 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.973 [2024-06-11 11:59:40.999832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.233 [2024-06-11 11:59:41.030979] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:28.233 [2024-06-11 11:59:41.031126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.805 11:59:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:28.805 11:59:41 -- common/autotest_common.sh@852 -- # return 0 00:05:28.805 11:59:41 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:29.066 11:59:41 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1263818 00:05:29.066 11:59:41 -- common/autotest_common.sh@926 -- # '[' -z 1263818 ']' 00:05:29.066 11:59:41 -- common/autotest_common.sh@930 -- # kill -0 1263818 00:05:29.066 11:59:41 -- common/autotest_common.sh@931 -- # uname 00:05:29.066 11:59:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:29.066 11:59:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1263818 00:05:29.066 11:59:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:29.066 11:59:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:29.066 11:59:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1263818' 00:05:29.066 killing process with pid 1263818 00:05:29.066 11:59:41 -- common/autotest_common.sh@945 -- # kill 1263818 00:05:29.066 11:59:41 -- common/autotest_common.sh@950 -- # wait 1263818 00:05:29.326 00:05:29.326 real 0m1.343s 00:05:29.326 user 0m1.495s 00:05:29.326 sys 0m0.357s 00:05:29.326 11:59:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.326 11:59:42 -- common/autotest_common.sh@10 -- # set +x 00:05:29.326 ************************************ 00:05:29.326 END TEST alias_rpc 00:05:29.326 ************************************ 00:05:29.326 11:59:42 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:29.326 11:59:42 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:29.326 11:59:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:29.326 11:59:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:29.326 11:59:42 -- common/autotest_common.sh@10 -- # set +x 00:05:29.326 ************************************ 00:05:29.326 START TEST spdkcli_tcp 00:05:29.326 ************************************ 00:05:29.326 11:59:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:29.326 * Looking for test storage... 
00:05:29.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:29.326 11:59:42 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:29.326 11:59:42 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:29.326 11:59:42 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:29.326 11:59:42 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:29.326 11:59:42 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:29.326 11:59:42 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:29.326 11:59:42 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:29.326 11:59:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:29.326 11:59:42 -- common/autotest_common.sh@10 -- # set +x 00:05:29.326 11:59:42 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1264202 00:05:29.326 11:59:42 -- spdkcli/tcp.sh@27 -- # waitforlisten 1264202 00:05:29.326 11:59:42 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:29.326 11:59:42 -- common/autotest_common.sh@819 -- # '[' -z 1264202 ']' 00:05:29.326 11:59:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.326 11:59:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:29.326 11:59:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.326 11:59:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:29.326 11:59:42 -- common/autotest_common.sh@10 -- # set +x 00:05:29.326 [2024-06-11 11:59:42.338275] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:29.326 [2024-06-11 11:59:42.338337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264202 ] 00:05:29.586 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.586 [2024-06-11 11:59:42.400779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.586 [2024-06-11 11:59:42.432841] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:29.586 [2024-06-11 11:59:42.433058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.586 [2024-06-11 11:59:42.433089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.158 11:59:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:30.158 11:59:43 -- common/autotest_common.sh@852 -- # return 0 00:05:30.158 11:59:43 -- spdkcli/tcp.sh@31 -- # socat_pid=1264226 00:05:30.158 11:59:43 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:30.158 11:59:43 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:30.419 [ 00:05:30.419 "bdev_malloc_delete", 00:05:30.419 "bdev_malloc_create", 00:05:30.419 "bdev_null_resize", 00:05:30.419 "bdev_null_delete", 00:05:30.419 "bdev_null_create", 00:05:30.419 "bdev_nvme_cuse_unregister", 00:05:30.419 "bdev_nvme_cuse_register", 00:05:30.419 "bdev_opal_new_user", 00:05:30.419 "bdev_opal_set_lock_state", 00:05:30.419 "bdev_opal_delete", 00:05:30.419 "bdev_opal_get_info", 00:05:30.419 "bdev_opal_create", 00:05:30.419 "bdev_nvme_opal_revert", 00:05:30.419 "bdev_nvme_opal_init", 00:05:30.419 "bdev_nvme_send_cmd", 00:05:30.419 "bdev_nvme_get_path_iostat", 00:05:30.419 "bdev_nvme_get_mdns_discovery_info", 00:05:30.419 "bdev_nvme_stop_mdns_discovery", 00:05:30.419 "bdev_nvme_start_mdns_discovery", 00:05:30.419 "bdev_nvme_set_multipath_policy", 00:05:30.419 "bdev_nvme_set_preferred_path", 00:05:30.419 "bdev_nvme_get_io_paths", 00:05:30.419 "bdev_nvme_remove_error_injection", 00:05:30.419 "bdev_nvme_add_error_injection", 00:05:30.419 "bdev_nvme_get_discovery_info", 00:05:30.419 "bdev_nvme_stop_discovery", 00:05:30.419 "bdev_nvme_start_discovery", 00:05:30.419 "bdev_nvme_get_controller_health_info", 00:05:30.419 "bdev_nvme_disable_controller", 00:05:30.419 "bdev_nvme_enable_controller", 00:05:30.419 "bdev_nvme_reset_controller", 00:05:30.419 "bdev_nvme_get_transport_statistics", 00:05:30.419 "bdev_nvme_apply_firmware", 00:05:30.419 "bdev_nvme_detach_controller", 00:05:30.419 "bdev_nvme_get_controllers", 00:05:30.419 "bdev_nvme_attach_controller", 00:05:30.419 "bdev_nvme_set_hotplug", 00:05:30.420 "bdev_nvme_set_options", 00:05:30.420 "bdev_passthru_delete", 00:05:30.420 "bdev_passthru_create", 00:05:30.420 "bdev_lvol_grow_lvstore", 00:05:30.420 "bdev_lvol_get_lvols", 00:05:30.420 "bdev_lvol_get_lvstores", 00:05:30.420 "bdev_lvol_delete", 00:05:30.420 "bdev_lvol_set_read_only", 00:05:30.420 "bdev_lvol_resize", 00:05:30.420 "bdev_lvol_decouple_parent", 00:05:30.420 "bdev_lvol_inflate", 00:05:30.420 "bdev_lvol_rename", 00:05:30.420 "bdev_lvol_clone_bdev", 00:05:30.420 "bdev_lvol_clone", 00:05:30.420 "bdev_lvol_snapshot", 00:05:30.420 "bdev_lvol_create", 00:05:30.420 "bdev_lvol_delete_lvstore", 00:05:30.420 "bdev_lvol_rename_lvstore", 00:05:30.420 "bdev_lvol_create_lvstore", 00:05:30.420 "bdev_raid_set_options", 00:05:30.420 
"bdev_raid_remove_base_bdev", 00:05:30.420 "bdev_raid_add_base_bdev", 00:05:30.420 "bdev_raid_delete", 00:05:30.420 "bdev_raid_create", 00:05:30.420 "bdev_raid_get_bdevs", 00:05:30.420 "bdev_error_inject_error", 00:05:30.420 "bdev_error_delete", 00:05:30.420 "bdev_error_create", 00:05:30.420 "bdev_split_delete", 00:05:30.420 "bdev_split_create", 00:05:30.420 "bdev_delay_delete", 00:05:30.420 "bdev_delay_create", 00:05:30.420 "bdev_delay_update_latency", 00:05:30.420 "bdev_zone_block_delete", 00:05:30.420 "bdev_zone_block_create", 00:05:30.420 "blobfs_create", 00:05:30.420 "blobfs_detect", 00:05:30.420 "blobfs_set_cache_size", 00:05:30.420 "bdev_aio_delete", 00:05:30.420 "bdev_aio_rescan", 00:05:30.420 "bdev_aio_create", 00:05:30.420 "bdev_ftl_set_property", 00:05:30.420 "bdev_ftl_get_properties", 00:05:30.420 "bdev_ftl_get_stats", 00:05:30.420 "bdev_ftl_unmap", 00:05:30.420 "bdev_ftl_unload", 00:05:30.420 "bdev_ftl_delete", 00:05:30.420 "bdev_ftl_load", 00:05:30.420 "bdev_ftl_create", 00:05:30.420 "bdev_virtio_attach_controller", 00:05:30.420 "bdev_virtio_scsi_get_devices", 00:05:30.420 "bdev_virtio_detach_controller", 00:05:30.420 "bdev_virtio_blk_set_hotplug", 00:05:30.420 "bdev_iscsi_delete", 00:05:30.420 "bdev_iscsi_create", 00:05:30.420 "bdev_iscsi_set_options", 00:05:30.420 "accel_error_inject_error", 00:05:30.420 "ioat_scan_accel_module", 00:05:30.420 "dsa_scan_accel_module", 00:05:30.420 "iaa_scan_accel_module", 00:05:30.420 "vfu_virtio_create_scsi_endpoint", 00:05:30.420 "vfu_virtio_scsi_remove_target", 00:05:30.420 "vfu_virtio_scsi_add_target", 00:05:30.420 "vfu_virtio_create_blk_endpoint", 00:05:30.420 "vfu_virtio_delete_endpoint", 00:05:30.420 "iscsi_set_options", 00:05:30.420 "iscsi_get_auth_groups", 00:05:30.420 "iscsi_auth_group_remove_secret", 00:05:30.420 "iscsi_auth_group_add_secret", 00:05:30.420 "iscsi_delete_auth_group", 00:05:30.420 "iscsi_create_auth_group", 00:05:30.420 "iscsi_set_discovery_auth", 00:05:30.420 "iscsi_get_options", 00:05:30.420 "iscsi_target_node_request_logout", 00:05:30.420 "iscsi_target_node_set_redirect", 00:05:30.420 "iscsi_target_node_set_auth", 00:05:30.420 "iscsi_target_node_add_lun", 00:05:30.420 "iscsi_get_connections", 00:05:30.420 "iscsi_portal_group_set_auth", 00:05:30.420 "iscsi_start_portal_group", 00:05:30.420 "iscsi_delete_portal_group", 00:05:30.420 "iscsi_create_portal_group", 00:05:30.420 "iscsi_get_portal_groups", 00:05:30.420 "iscsi_delete_target_node", 00:05:30.420 "iscsi_target_node_remove_pg_ig_maps", 00:05:30.420 "iscsi_target_node_add_pg_ig_maps", 00:05:30.420 "iscsi_create_target_node", 00:05:30.420 "iscsi_get_target_nodes", 00:05:30.420 "iscsi_delete_initiator_group", 00:05:30.420 "iscsi_initiator_group_remove_initiators", 00:05:30.420 "iscsi_initiator_group_add_initiators", 00:05:30.420 "iscsi_create_initiator_group", 00:05:30.420 "iscsi_get_initiator_groups", 00:05:30.420 "nvmf_set_crdt", 00:05:30.420 "nvmf_set_config", 00:05:30.420 "nvmf_set_max_subsystems", 00:05:30.420 "nvmf_subsystem_get_listeners", 00:05:30.420 "nvmf_subsystem_get_qpairs", 00:05:30.420 "nvmf_subsystem_get_controllers", 00:05:30.420 "nvmf_get_stats", 00:05:30.420 "nvmf_get_transports", 00:05:30.420 "nvmf_create_transport", 00:05:30.420 "nvmf_get_targets", 00:05:30.420 "nvmf_delete_target", 00:05:30.420 "nvmf_create_target", 00:05:30.420 "nvmf_subsystem_allow_any_host", 00:05:30.420 "nvmf_subsystem_remove_host", 00:05:30.420 "nvmf_subsystem_add_host", 00:05:30.420 "nvmf_subsystem_remove_ns", 00:05:30.420 "nvmf_subsystem_add_ns", 00:05:30.420 
"nvmf_subsystem_listener_set_ana_state", 00:05:30.420 "nvmf_discovery_get_referrals", 00:05:30.420 "nvmf_discovery_remove_referral", 00:05:30.420 "nvmf_discovery_add_referral", 00:05:30.420 "nvmf_subsystem_remove_listener", 00:05:30.420 "nvmf_subsystem_add_listener", 00:05:30.420 "nvmf_delete_subsystem", 00:05:30.420 "nvmf_create_subsystem", 00:05:30.420 "nvmf_get_subsystems", 00:05:30.420 "env_dpdk_get_mem_stats", 00:05:30.420 "nbd_get_disks", 00:05:30.420 "nbd_stop_disk", 00:05:30.420 "nbd_start_disk", 00:05:30.420 "ublk_recover_disk", 00:05:30.420 "ublk_get_disks", 00:05:30.420 "ublk_stop_disk", 00:05:30.420 "ublk_start_disk", 00:05:30.420 "ublk_destroy_target", 00:05:30.420 "ublk_create_target", 00:05:30.420 "virtio_blk_create_transport", 00:05:30.420 "virtio_blk_get_transports", 00:05:30.420 "vhost_controller_set_coalescing", 00:05:30.420 "vhost_get_controllers", 00:05:30.420 "vhost_delete_controller", 00:05:30.420 "vhost_create_blk_controller", 00:05:30.420 "vhost_scsi_controller_remove_target", 00:05:30.420 "vhost_scsi_controller_add_target", 00:05:30.420 "vhost_start_scsi_controller", 00:05:30.420 "vhost_create_scsi_controller", 00:05:30.420 "thread_set_cpumask", 00:05:30.420 "framework_get_scheduler", 00:05:30.420 "framework_set_scheduler", 00:05:30.420 "framework_get_reactors", 00:05:30.420 "thread_get_io_channels", 00:05:30.420 "thread_get_pollers", 00:05:30.420 "thread_get_stats", 00:05:30.420 "framework_monitor_context_switch", 00:05:30.420 "spdk_kill_instance", 00:05:30.420 "log_enable_timestamps", 00:05:30.420 "log_get_flags", 00:05:30.420 "log_clear_flag", 00:05:30.420 "log_set_flag", 00:05:30.420 "log_get_level", 00:05:30.420 "log_set_level", 00:05:30.420 "log_get_print_level", 00:05:30.420 "log_set_print_level", 00:05:30.420 "framework_enable_cpumask_locks", 00:05:30.420 "framework_disable_cpumask_locks", 00:05:30.420 "framework_wait_init", 00:05:30.420 "framework_start_init", 00:05:30.420 "scsi_get_devices", 00:05:30.420 "bdev_get_histogram", 00:05:30.420 "bdev_enable_histogram", 00:05:30.420 "bdev_set_qos_limit", 00:05:30.420 "bdev_set_qd_sampling_period", 00:05:30.420 "bdev_get_bdevs", 00:05:30.420 "bdev_reset_iostat", 00:05:30.420 "bdev_get_iostat", 00:05:30.420 "bdev_examine", 00:05:30.420 "bdev_wait_for_examine", 00:05:30.420 "bdev_set_options", 00:05:30.420 "notify_get_notifications", 00:05:30.420 "notify_get_types", 00:05:30.420 "accel_get_stats", 00:05:30.420 "accel_set_options", 00:05:30.420 "accel_set_driver", 00:05:30.420 "accel_crypto_key_destroy", 00:05:30.420 "accel_crypto_keys_get", 00:05:30.420 "accel_crypto_key_create", 00:05:30.420 "accel_assign_opc", 00:05:30.420 "accel_get_module_info", 00:05:30.420 "accel_get_opc_assignments", 00:05:30.420 "vmd_rescan", 00:05:30.420 "vmd_remove_device", 00:05:30.420 "vmd_enable", 00:05:30.420 "sock_set_default_impl", 00:05:30.420 "sock_impl_set_options", 00:05:30.420 "sock_impl_get_options", 00:05:30.420 "iobuf_get_stats", 00:05:30.420 "iobuf_set_options", 00:05:30.420 "framework_get_pci_devices", 00:05:30.420 "framework_get_config", 00:05:30.420 "framework_get_subsystems", 00:05:30.420 "vfu_tgt_set_base_path", 00:05:30.420 "trace_get_info", 00:05:30.420 "trace_get_tpoint_group_mask", 00:05:30.420 "trace_disable_tpoint_group", 00:05:30.420 "trace_enable_tpoint_group", 00:05:30.420 "trace_clear_tpoint_mask", 00:05:30.420 "trace_set_tpoint_mask", 00:05:30.420 "spdk_get_version", 00:05:30.420 "rpc_get_methods" 00:05:30.420 ] 00:05:30.420 11:59:43 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:30.420 
11:59:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:30.420 11:59:43 -- common/autotest_common.sh@10 -- # set +x 00:05:30.420 11:59:43 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:30.420 11:59:43 -- spdkcli/tcp.sh@38 -- # killprocess 1264202 00:05:30.420 11:59:43 -- common/autotest_common.sh@926 -- # '[' -z 1264202 ']' 00:05:30.420 11:59:43 -- common/autotest_common.sh@930 -- # kill -0 1264202 00:05:30.420 11:59:43 -- common/autotest_common.sh@931 -- # uname 00:05:30.420 11:59:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:30.420 11:59:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1264202 00:05:30.420 11:59:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:30.420 11:59:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:30.420 11:59:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1264202' 00:05:30.420 killing process with pid 1264202 00:05:30.420 11:59:43 -- common/autotest_common.sh@945 -- # kill 1264202 00:05:30.420 11:59:43 -- common/autotest_common.sh@950 -- # wait 1264202 00:05:30.683 00:05:30.683 real 0m1.353s 00:05:30.683 user 0m2.537s 00:05:30.683 sys 0m0.400s 00:05:30.683 11:59:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.683 11:59:43 -- common/autotest_common.sh@10 -- # set +x 00:05:30.683 ************************************ 00:05:30.683 END TEST spdkcli_tcp 00:05:30.683 ************************************ 00:05:30.683 11:59:43 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:30.683 11:59:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.683 11:59:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.683 11:59:43 -- common/autotest_common.sh@10 -- # set +x 00:05:30.683 ************************************ 00:05:30.683 START TEST dpdk_mem_utility 00:05:30.683 ************************************ 00:05:30.683 11:59:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:30.683 * Looking for test storage... 00:05:30.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:30.683 11:59:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:30.683 11:59:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1264558 00:05:30.683 11:59:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1264558 00:05:30.683 11:59:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.683 11:59:43 -- common/autotest_common.sh@819 -- # '[' -z 1264558 ']' 00:05:30.683 11:59:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.683 11:59:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:30.683 11:59:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:30.683 11:59:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:30.683 11:59:43 -- common/autotest_common.sh@10 -- # set +x 00:05:30.944 [2024-06-11 11:59:43.725321] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:30.944 [2024-06-11 11:59:43.725393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264558 ] 00:05:30.944 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.944 [2024-06-11 11:59:43.792657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.944 [2024-06-11 11:59:43.828777] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:30.944 [2024-06-11 11:59:43.828944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.515 11:59:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:31.515 11:59:44 -- common/autotest_common.sh@852 -- # return 0 00:05:31.515 11:59:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:31.515 11:59:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:31.515 11:59:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.515 11:59:44 -- common/autotest_common.sh@10 -- # set +x 00:05:31.515 { 00:05:31.515 "filename": "/tmp/spdk_mem_dump.txt" 00:05:31.515 } 00:05:31.515 11:59:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.515 11:59:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:31.515 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:31.515 1 heaps totaling size 814.000000 MiB 00:05:31.515 size: 814.000000 MiB heap id: 0 00:05:31.515 end heaps---------- 00:05:31.515 8 mempools totaling size 598.116089 MiB 00:05:31.515 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:31.515 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:31.515 size: 84.521057 MiB name: bdev_io_1264558 00:05:31.515 size: 51.011292 MiB name: evtpool_1264558 00:05:31.515 size: 50.003479 MiB name: msgpool_1264558 00:05:31.516 size: 21.763794 MiB name: PDU_Pool 00:05:31.516 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:31.516 size: 0.026123 MiB name: Session_Pool 00:05:31.516 end mempools------- 00:05:31.516 6 memzones totaling size 4.142822 MiB 00:05:31.516 size: 1.000366 MiB name: RG_ring_0_1264558 00:05:31.516 size: 1.000366 MiB name: RG_ring_1_1264558 00:05:31.516 size: 1.000366 MiB name: RG_ring_4_1264558 00:05:31.516 size: 1.000366 MiB name: RG_ring_5_1264558 00:05:31.516 size: 0.125366 MiB name: RG_ring_2_1264558 00:05:31.516 size: 0.015991 MiB name: RG_ring_3_1264558 00:05:31.516 end memzones------- 00:05:31.516 11:59:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:31.777 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:31.777 list of free elements. 
size: 12.519348 MiB 00:05:31.777 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:31.777 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:31.777 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:31.777 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:31.777 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:31.777 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:31.777 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:31.777 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:31.777 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:31.777 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:31.777 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:31.777 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:31.777 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:31.777 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:31.777 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:31.777 list of standard malloc elements. size: 199.218079 MiB 00:05:31.777 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:31.777 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:31.777 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:31.777 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:31.777 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:31.777 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:31.777 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:31.777 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:31.777 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:31.777 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:31.777 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:31.777 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:31.777 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:31.777 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:31.777 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:31.777 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:31.777 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:31.777 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:31.777 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:31.777 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:31.777 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:31.777 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:31.777 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:31.777 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:31.777 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:31.777 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:31.777 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:31.777 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:31.777 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:31.777 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:31.777 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:31.777 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:31.777 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:31.777 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:31.777 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:31.777 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:31.777 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:31.777 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:31.777 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:31.777 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:31.777 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:31.777 list of memzone associated elements. size: 602.262573 MiB 00:05:31.777 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:31.777 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:31.777 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:31.777 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:31.777 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:31.777 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1264558_0 00:05:31.777 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:31.777 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1264558_0 00:05:31.777 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:31.777 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1264558_0 00:05:31.777 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:31.777 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:31.777 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:31.777 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:31.777 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:31.777 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1264558 00:05:31.778 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:31.778 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1264558 00:05:31.778 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:31.778 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1264558 00:05:31.778 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:31.778 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:31.778 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:31.778 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:31.778 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:31.778 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:31.778 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:31.778 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:31.778 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:31.778 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1264558 00:05:31.778 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:31.778 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1264558 00:05:31.778 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:31.778 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1264558 00:05:31.778 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:31.778 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1264558 00:05:31.778 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:31.778 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1264558 00:05:31.778 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:31.778 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:31.778 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:31.778 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:31.778 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:31.778 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:31.778 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:31.778 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1264558 00:05:31.778 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:31.778 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:31.778 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:31.778 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:31.778 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:31.778 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1264558 00:05:31.778 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:31.778 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:31.778 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:31.778 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1264558 00:05:31.778 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:31.778 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1264558 00:05:31.778 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:31.778 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:31.778 11:59:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:31.778 11:59:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1264558 00:05:31.778 11:59:44 -- common/autotest_common.sh@926 -- # '[' -z 1264558 ']' 00:05:31.778 11:59:44 -- common/autotest_common.sh@930 -- # kill -0 1264558 00:05:31.778 11:59:44 -- common/autotest_common.sh@931 -- # uname 00:05:31.778 11:59:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:31.778 11:59:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1264558 00:05:31.778 11:59:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:31.778 11:59:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:31.778 11:59:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1264558' 00:05:31.778 killing process with pid 1264558 00:05:31.778 11:59:44 -- common/autotest_common.sh@945 -- # kill 1264558 00:05:31.778 11:59:44 -- common/autotest_common.sh@950 -- # wait 1264558 00:05:32.039 00:05:32.039 real 0m1.265s 00:05:32.039 user 0m1.325s 00:05:32.039 sys 0m0.391s 00:05:32.039 11:59:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.039 11:59:44 -- common/autotest_common.sh@10 -- # set +x 00:05:32.039 ************************************ 00:05:32.039 END TEST dpdk_mem_utility 00:05:32.039 ************************************ 00:05:32.039 11:59:44 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:32.039 11:59:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:32.039 11:59:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:32.039 11:59:44 -- common/autotest_common.sh@10 -- # set +x 
00:05:32.039 ************************************ 00:05:32.039 START TEST event 00:05:32.039 ************************************ 00:05:32.039 11:59:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:32.039 * Looking for test storage... 00:05:32.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:32.039 11:59:44 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:32.039 11:59:44 -- bdev/nbd_common.sh@6 -- # set -e 00:05:32.039 11:59:44 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:32.039 11:59:44 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:32.039 11:59:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:32.039 11:59:44 -- common/autotest_common.sh@10 -- # set +x 00:05:32.039 ************************************ 00:05:32.039 START TEST event_perf 00:05:32.039 ************************************ 00:05:32.039 11:59:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:32.039 Running I/O for 1 seconds...[2024-06-11 11:59:45.010269] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:32.039 [2024-06-11 11:59:45.010371] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264766 ] 00:05:32.039 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.301 [2024-06-11 11:59:45.081194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:32.301 [2024-06-11 11:59:45.120346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.301 [2024-06-11 11:59:45.120448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.301 [2024-06-11 11:59:45.120605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.301 [2024-06-11 11:59:45.120606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:33.242 Running I/O for 1 seconds... 00:05:33.242 lcore 0: 170811 00:05:33.242 lcore 1: 170807 00:05:33.242 lcore 2: 170808 00:05:33.242 lcore 3: 170811 00:05:33.242 done. 
00:05:33.242 00:05:33.242 real 0m1.171s 00:05:33.242 user 0m4.083s 00:05:33.242 sys 0m0.086s 00:05:33.242 11:59:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.242 11:59:46 -- common/autotest_common.sh@10 -- # set +x 00:05:33.242 ************************************ 00:05:33.242 END TEST event_perf 00:05:33.242 ************************************ 00:05:33.242 11:59:46 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:33.242 11:59:46 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:33.242 11:59:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:33.242 11:59:46 -- common/autotest_common.sh@10 -- # set +x 00:05:33.242 ************************************ 00:05:33.242 START TEST event_reactor 00:05:33.242 ************************************ 00:05:33.242 11:59:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:33.242 [2024-06-11 11:59:46.224253] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:33.242 [2024-06-11 11:59:46.224345] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265032 ] 00:05:33.242 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.502 [2024-06-11 11:59:46.287190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.502 [2024-06-11 11:59:46.313838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.443 test_start 00:05:34.443 oneshot 00:05:34.443 tick 100 00:05:34.443 tick 100 00:05:34.443 tick 250 00:05:34.443 tick 100 00:05:34.443 tick 100 00:05:34.443 tick 100 00:05:34.443 tick 250 00:05:34.443 tick 500 00:05:34.443 tick 100 00:05:34.443 tick 100 00:05:34.443 tick 250 00:05:34.443 tick 100 00:05:34.443 tick 100 00:05:34.443 test_end 00:05:34.443 00:05:34.443 real 0m1.148s 00:05:34.443 user 0m1.080s 00:05:34.443 sys 0m0.063s 00:05:34.443 11:59:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.443 11:59:47 -- common/autotest_common.sh@10 -- # set +x 00:05:34.443 ************************************ 00:05:34.443 END TEST event_reactor 00:05:34.443 ************************************ 00:05:34.443 11:59:47 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:34.443 11:59:47 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:34.443 11:59:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:34.443 11:59:47 -- common/autotest_common.sh@10 -- # set +x 00:05:34.443 ************************************ 00:05:34.443 START TEST event_reactor_perf 00:05:34.443 ************************************ 00:05:34.443 11:59:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:34.443 [2024-06-11 11:59:47.415258] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:34.443 [2024-06-11 11:59:47.415350] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265385 ] 00:05:34.443 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.704 [2024-06-11 11:59:47.479203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.704 [2024-06-11 11:59:47.506119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.646 test_start 00:05:35.646 test_end 00:05:35.646 Performance: 367919 events per second 00:05:35.646 00:05:35.646 real 0m1.150s 00:05:35.646 user 0m1.076s 00:05:35.646 sys 0m0.071s 00:05:35.646 11:59:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.646 11:59:48 -- common/autotest_common.sh@10 -- # set +x 00:05:35.646 ************************************ 00:05:35.646 END TEST event_reactor_perf 00:05:35.646 ************************************ 00:05:35.646 11:59:48 -- event/event.sh@49 -- # uname -s 00:05:35.646 11:59:48 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:35.646 11:59:48 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:35.646 11:59:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:35.646 11:59:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:35.646 11:59:48 -- common/autotest_common.sh@10 -- # set +x 00:05:35.646 ************************************ 00:05:35.646 START TEST event_scheduler 00:05:35.646 ************************************ 00:05:35.646 11:59:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:35.646 * Looking for test storage... 00:05:35.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:35.907 11:59:48 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:35.907 11:59:48 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1265721 00:05:35.907 11:59:48 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:35.907 11:59:48 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:35.907 11:59:48 -- scheduler/scheduler.sh@37 -- # waitforlisten 1265721 00:05:35.907 11:59:48 -- common/autotest_common.sh@819 -- # '[' -z 1265721 ']' 00:05:35.907 11:59:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.907 11:59:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:35.907 11:59:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.907 11:59:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:35.908 11:59:48 -- common/autotest_common.sh@10 -- # set +x 00:05:35.908 [2024-06-11 11:59:48.745041] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:35.908 [2024-06-11 11:59:48.745115] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265721 ] 00:05:35.908 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.908 [2024-06-11 11:59:48.798012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:35.908 [2024-06-11 11:59:48.827902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.908 [2024-06-11 11:59:48.828063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.908 [2024-06-11 11:59:48.828228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.908 [2024-06-11 11:59:48.828229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.480 11:59:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:36.480 11:59:49 -- common/autotest_common.sh@852 -- # return 0 00:05:36.480 11:59:49 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:36.480 11:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:36.480 11:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.480 POWER: Env isn't set yet! 00:05:36.480 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:36.480 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:36.480 POWER: Cannot set governor of lcore 0 to userspace 00:05:36.480 POWER: Attempting to initialise PSTAT power management... 00:05:36.741 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:36.741 POWER: Initialized successfully for lcore 0 power management 00:05:36.741 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:36.741 POWER: Initialized successfully for lcore 1 power management 00:05:36.741 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:36.741 POWER: Initialized successfully for lcore 2 power management 00:05:36.741 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:36.741 POWER: Initialized successfully for lcore 3 power management 00:05:36.741 11:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:36.741 11:59:49 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:36.741 11:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:36.741 11:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.741 [2024-06-11 11:59:49.598378] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:36.741 11:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:36.741 11:59:49 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:36.741 11:59:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:36.741 11:59:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.741 11:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.741 ************************************ 00:05:36.741 START TEST scheduler_create_thread 00:05:36.741 ************************************ 00:05:36.741 11:59:49 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:36.741 11:59:49 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:36.741 11:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:36.741 11:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.741 2 00:05:36.741 11:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:36.741 11:59:49 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:36.741 11:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:36.741 11:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.741 3 00:05:36.741 11:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:36.741 11:59:49 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:36.741 11:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:36.741 11:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.741 4 00:05:36.741 11:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:36.741 11:59:49 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:36.741 11:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:36.741 11:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.741 5 00:05:36.742 11:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:36.742 11:59:49 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:36.742 11:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:36.742 11:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.742 6 00:05:36.742 11:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:36.742 11:59:49 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:36.742 11:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:36.742 11:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.742 7 00:05:36.742 11:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:36.742 11:59:49 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:36.742 11:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:36.742 11:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.742 8 00:05:36.742 11:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:36.742 11:59:49 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:36.742 11:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:36.742 11:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.742 9 00:05:36.742 
11:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:36.742 11:59:49 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:36.742 11:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:36.742 11:59:49 -- common/autotest_common.sh@10 -- # set +x 00:05:38.129 10 00:05:38.129 11:59:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.129 11:59:50 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:38.130 11:59:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.130 11:59:50 -- common/autotest_common.sh@10 -- # set +x 00:05:39.516 11:59:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:39.516 11:59:52 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:39.516 11:59:52 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:39.516 11:59:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:39.516 11:59:52 -- common/autotest_common.sh@10 -- # set +x 00:05:40.088 11:59:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:40.088 11:59:52 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:40.088 11:59:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:40.088 11:59:52 -- common/autotest_common.sh@10 -- # set +x 00:05:41.030 11:59:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.030 11:59:53 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:41.030 11:59:53 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:41.031 11:59:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.031 11:59:53 -- common/autotest_common.sh@10 -- # set +x 00:05:41.600 11:59:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.600 00:05:41.600 real 0m4.797s 00:05:41.600 user 0m0.023s 00:05:41.600 sys 0m0.008s 00:05:41.600 11:59:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.600 11:59:54 -- common/autotest_common.sh@10 -- # set +x 00:05:41.600 ************************************ 00:05:41.600 END TEST scheduler_create_thread 00:05:41.600 ************************************ 00:05:41.600 11:59:54 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:41.600 11:59:54 -- scheduler/scheduler.sh@46 -- # killprocess 1265721 00:05:41.600 11:59:54 -- common/autotest_common.sh@926 -- # '[' -z 1265721 ']' 00:05:41.600 11:59:54 -- common/autotest_common.sh@930 -- # kill -0 1265721 00:05:41.600 11:59:54 -- common/autotest_common.sh@931 -- # uname 00:05:41.600 11:59:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:41.600 11:59:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1265721 00:05:41.600 11:59:54 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:41.600 11:59:54 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:41.600 11:59:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1265721' 00:05:41.600 killing process with pid 1265721 00:05:41.600 11:59:54 -- common/autotest_common.sh@945 -- # kill 1265721 00:05:41.600 11:59:54 -- common/autotest_common.sh@950 -- # wait 1265721 00:05:41.861 [2024-06-11 11:59:54.684358] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
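The scheduler_create_thread run above drives the test app purely through rpc.py's plugin mechanism (rpc_cmd is a thin wrapper around it). A condensed sketch of the calls exercised, assuming the scheduler test app from test/event/scheduler is already running and that scheduler_plugin is on rpc.py's plugin search path (an assumption here; the test wires this up through its own helpers):

    # Create an always-active thread pinned to core 0 (mask 0x1, 100% active).
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
        -n active_pinned -m 0x1 -a 100

    # Drop an existing thread to 50% activity (thread id 11 in the run above).
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50

    # Delete a thread that is no longer needed (thread id 12 in the run above).
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12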
00:05:41.861 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:41.861 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:41.861 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:41.861 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:41.861 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:41.861 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:41.861 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:41.861 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:41.861 00:05:41.861 real 0m6.243s 00:05:41.861 user 0m14.115s 00:05:41.861 sys 0m0.327s 00:05:41.861 11:59:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.861 11:59:54 -- common/autotest_common.sh@10 -- # set +x 00:05:41.861 ************************************ 00:05:41.861 END TEST event_scheduler 00:05:41.861 ************************************ 00:05:41.861 11:59:54 -- event/event.sh@51 -- # modprobe -n nbd 00:05:41.861 11:59:54 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:41.861 11:59:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:41.861 11:59:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.861 11:59:54 -- common/autotest_common.sh@10 -- # set +x 00:05:41.861 ************************************ 00:05:41.861 START TEST app_repeat 00:05:41.861 ************************************ 00:05:41.861 11:59:54 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:05:41.861 11:59:54 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.861 11:59:54 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.861 11:59:54 -- event/event.sh@13 -- # local nbd_list 00:05:41.861 11:59:54 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.861 11:59:54 -- event/event.sh@14 -- # local bdev_list 00:05:41.861 11:59:54 -- event/event.sh@15 -- # local repeat_times=4 00:05:41.861 11:59:54 -- event/event.sh@17 -- # modprobe nbd 00:05:42.121 11:59:54 -- event/event.sh@19 -- # repeat_pid=1266859 00:05:42.121 11:59:54 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.121 11:59:54 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:42.121 11:59:54 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1266859' 00:05:42.121 Process app_repeat pid: 1266859 00:05:42.121 11:59:54 -- event/event.sh@23 -- # for i in {0..2} 00:05:42.121 11:59:54 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:42.121 spdk_app_start Round 0 00:05:42.121 11:59:54 -- event/event.sh@25 -- # waitforlisten 1266859 /var/tmp/spdk-nbd.sock 00:05:42.121 11:59:54 -- common/autotest_common.sh@819 -- # '[' -z 1266859 ']' 00:05:42.121 11:59:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:42.121 11:59:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:42.121 11:59:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:42.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:42.121 11:59:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:42.121 11:59:54 -- common/autotest_common.sh@10 -- # set +x 00:05:42.121 [2024-06-11 11:59:54.926321] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:42.121 [2024-06-11 11:59:54.926405] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1266859 ] 00:05:42.121 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.121 [2024-06-11 11:59:54.991997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.121 [2024-06-11 11:59:55.025354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.121 [2024-06-11 11:59:55.025445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.691 11:59:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:42.691 11:59:55 -- common/autotest_common.sh@852 -- # return 0 00:05:42.691 11:59:55 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.952 Malloc0 00:05:42.952 11:59:55 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.213 Malloc1 00:05:43.213 11:59:56 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.213 11:59:56 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.213 11:59:56 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.213 11:59:56 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.213 11:59:56 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.213 11:59:56 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.213 11:59:56 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.213 11:59:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.213 11:59:56 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.213 11:59:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.213 11:59:56 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.213 11:59:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:43.213 11:59:56 -- bdev/nbd_common.sh@12 -- # local i 00:05:43.213 11:59:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.213 11:59:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.213 11:59:56 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:43.213 /dev/nbd0 00:05:43.213 11:59:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:43.213 11:59:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:43.213 11:59:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:43.213 11:59:56 -- common/autotest_common.sh@857 -- # local i 00:05:43.213 11:59:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:43.213 11:59:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:43.213 11:59:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:43.213 11:59:56 -- 
common/autotest_common.sh@861 -- # break 00:05:43.213 11:59:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:43.213 11:59:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:43.213 11:59:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.213 1+0 records in 00:05:43.213 1+0 records out 00:05:43.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176812 s, 23.2 MB/s 00:05:43.213 11:59:56 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.213 11:59:56 -- common/autotest_common.sh@874 -- # size=4096 00:05:43.213 11:59:56 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.213 11:59:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:43.213 11:59:56 -- common/autotest_common.sh@877 -- # return 0 00:05:43.213 11:59:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.213 11:59:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.213 11:59:56 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:43.473 /dev/nbd1 00:05:43.473 11:59:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:43.473 11:59:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:43.473 11:59:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:43.473 11:59:56 -- common/autotest_common.sh@857 -- # local i 00:05:43.473 11:59:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:43.473 11:59:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:43.473 11:59:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:43.473 11:59:56 -- common/autotest_common.sh@861 -- # break 00:05:43.473 11:59:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:43.473 11:59:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:43.473 11:59:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.473 1+0 records in 00:05:43.473 1+0 records out 00:05:43.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216615 s, 18.9 MB/s 00:05:43.473 11:59:56 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.473 11:59:56 -- common/autotest_common.sh@874 -- # size=4096 00:05:43.473 11:59:56 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.473 11:59:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:43.473 11:59:56 -- common/autotest_common.sh@877 -- # return 0 00:05:43.473 11:59:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.473 11:59:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.473 11:59:56 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.473 11:59:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.473 11:59:56 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.733 11:59:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:43.733 { 00:05:43.733 "nbd_device": "/dev/nbd0", 00:05:43.733 "bdev_name": "Malloc0" 00:05:43.733 }, 00:05:43.733 { 00:05:43.733 "nbd_device": "/dev/nbd1", 
00:05:43.733 "bdev_name": "Malloc1" 00:05:43.733 } 00:05:43.733 ]' 00:05:43.733 11:59:56 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:43.733 { 00:05:43.733 "nbd_device": "/dev/nbd0", 00:05:43.733 "bdev_name": "Malloc0" 00:05:43.733 }, 00:05:43.733 { 00:05:43.733 "nbd_device": "/dev/nbd1", 00:05:43.733 "bdev_name": "Malloc1" 00:05:43.733 } 00:05:43.733 ]' 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:43.734 /dev/nbd1' 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:43.734 /dev/nbd1' 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@65 -- # count=2 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@95 -- # count=2 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:43.734 256+0 records in 00:05:43.734 256+0 records out 00:05:43.734 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119904 s, 87.5 MB/s 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:43.734 256+0 records in 00:05:43.734 256+0 records out 00:05:43.734 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159175 s, 65.9 MB/s 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:43.734 256+0 records in 00:05:43.734 256+0 records out 00:05:43.734 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165468 s, 63.4 MB/s 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@51 -- # local i 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.734 11:59:56 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:43.995 11:59:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:43.995 11:59:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:43.995 11:59:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:43.995 11:59:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.995 11:59:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.995 11:59:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:43.995 11:59:56 -- bdev/nbd_common.sh@41 -- # break 00:05:43.995 11:59:56 -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.995 11:59:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.995 11:59:56 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:43.995 11:59:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:43.995 11:59:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:43.995 11:59:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:43.995 11:59:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.995 11:59:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.995 11:59:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:43.995 11:59:56 -- bdev/nbd_common.sh@41 -- # break 00:05:43.995 11:59:56 -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.995 11:59:56 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.995 11:59:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.995 11:59:56 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.256 11:59:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:44.256 11:59:57 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:44.256 11:59:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.256 11:59:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:44.256 11:59:57 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:44.256 11:59:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.256 11:59:57 -- bdev/nbd_common.sh@65 -- # true 00:05:44.256 11:59:57 -- bdev/nbd_common.sh@65 -- # count=0 00:05:44.256 11:59:57 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:44.256 11:59:57 -- bdev/nbd_common.sh@104 -- # count=0 00:05:44.256 11:59:57 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:44.256 11:59:57 -- bdev/nbd_common.sh@109 -- # return 0 00:05:44.256 11:59:57 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:44.516 11:59:57 -- event/event.sh@35 -- # 
sleep 3 00:05:44.516 [2024-06-11 11:59:57.467141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.516 [2024-06-11 11:59:57.494184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.516 [2024-06-11 11:59:57.494188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.516 [2024-06-11 11:59:57.525651] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:44.517 [2024-06-11 11:59:57.525686] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:47.815 12:00:00 -- event/event.sh@23 -- # for i in {0..2} 00:05:47.815 12:00:00 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:47.815 spdk_app_start Round 1 00:05:47.815 12:00:00 -- event/event.sh@25 -- # waitforlisten 1266859 /var/tmp/spdk-nbd.sock 00:05:47.815 12:00:00 -- common/autotest_common.sh@819 -- # '[' -z 1266859 ']' 00:05:47.815 12:00:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.815 12:00:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:47.815 12:00:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:47.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:47.815 12:00:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:47.815 12:00:00 -- common/autotest_common.sh@10 -- # set +x 00:05:47.815 12:00:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:47.815 12:00:00 -- common/autotest_common.sh@852 -- # return 0 00:05:47.815 12:00:00 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.815 Malloc0 00:05:47.815 12:00:00 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.815 Malloc1 00:05:47.815 12:00:00 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:47.815 12:00:00 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.815 12:00:00 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.815 12:00:00 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:47.815 12:00:00 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.815 12:00:00 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:47.815 12:00:00 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:47.815 12:00:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.815 12:00:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.815 12:00:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:47.815 12:00:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.815 12:00:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:47.815 12:00:00 -- bdev/nbd_common.sh@12 -- # local i 00:05:47.815 12:00:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:47.815 12:00:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.815 12:00:00 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:48.075 /dev/nbd0 00:05:48.075 12:00:00 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:48.075 12:00:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:48.075 12:00:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:48.075 12:00:00 -- common/autotest_common.sh@857 -- # local i 00:05:48.075 12:00:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:48.075 12:00:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:48.075 12:00:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:48.075 12:00:00 -- common/autotest_common.sh@861 -- # break 00:05:48.075 12:00:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:48.075 12:00:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:48.075 12:00:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.075 1+0 records in 00:05:48.075 1+0 records out 00:05:48.075 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211194 s, 19.4 MB/s 00:05:48.075 12:00:00 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.075 12:00:00 -- common/autotest_common.sh@874 -- # size=4096 00:05:48.075 12:00:00 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.075 12:00:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:48.075 12:00:00 -- common/autotest_common.sh@877 -- # return 0 00:05:48.075 12:00:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.075 12:00:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.075 12:00:00 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:48.336 /dev/nbd1 00:05:48.336 12:00:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:48.336 12:00:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:48.336 12:00:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:48.336 12:00:01 -- common/autotest_common.sh@857 -- # local i 00:05:48.336 12:00:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:48.336 12:00:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:48.336 12:00:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:48.336 12:00:01 -- common/autotest_common.sh@861 -- # break 00:05:48.336 12:00:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:48.336 12:00:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:48.336 12:00:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.336 1+0 records in 00:05:48.336 1+0 records out 00:05:48.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286848 s, 14.3 MB/s 00:05:48.336 12:00:01 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.336 12:00:01 -- common/autotest_common.sh@874 -- # size=4096 00:05:48.336 12:00:01 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.336 12:00:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:48.336 12:00:01 -- common/autotest_common.sh@877 -- # return 0 00:05:48.336 12:00:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.336 12:00:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.336 12:00:01 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.336 12:00:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.336 12:00:01 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.336 12:00:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:48.336 { 00:05:48.336 "nbd_device": "/dev/nbd0", 00:05:48.336 "bdev_name": "Malloc0" 00:05:48.336 }, 00:05:48.336 { 00:05:48.336 "nbd_device": "/dev/nbd1", 00:05:48.336 "bdev_name": "Malloc1" 00:05:48.336 } 00:05:48.336 ]' 00:05:48.336 12:00:01 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:48.336 { 00:05:48.336 "nbd_device": "/dev/nbd0", 00:05:48.336 "bdev_name": "Malloc0" 00:05:48.336 }, 00:05:48.336 { 00:05:48.336 "nbd_device": "/dev/nbd1", 00:05:48.336 "bdev_name": "Malloc1" 00:05:48.336 } 00:05:48.336 ]' 00:05:48.336 12:00:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.597 12:00:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:48.597 /dev/nbd1' 00:05:48.597 12:00:01 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:48.597 /dev/nbd1' 00:05:48.597 12:00:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.597 12:00:01 -- bdev/nbd_common.sh@65 -- # count=2 00:05:48.597 12:00:01 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:48.597 12:00:01 -- bdev/nbd_common.sh@95 -- # count=2 00:05:48.597 12:00:01 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:48.597 12:00:01 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:48.597 12:00:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.597 12:00:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:48.598 256+0 records in 00:05:48.598 256+0 records out 00:05:48.598 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118487 s, 88.5 MB/s 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:48.598 256+0 records in 00:05:48.598 256+0 records out 00:05:48.598 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155319 s, 67.5 MB/s 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:48.598 256+0 records in 00:05:48.598 256+0 records out 00:05:48.598 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167677 s, 62.5 MB/s 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@51 -- # local i 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@41 -- # break 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.598 12:00:01 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:48.859 12:00:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:48.859 12:00:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:48.859 12:00:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:48.859 12:00:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.859 12:00:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.859 12:00:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:48.859 12:00:01 -- bdev/nbd_common.sh@41 -- # break 00:05:48.859 12:00:01 -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.859 12:00:01 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.859 12:00:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.859 12:00:01 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.119 12:00:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:49.119 12:00:01 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:49.119 12:00:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.119 12:00:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:49.119 12:00:01 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:49.119 12:00:01 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:05:49.119 12:00:01 -- bdev/nbd_common.sh@65 -- # true 00:05:49.119 12:00:01 -- bdev/nbd_common.sh@65 -- # count=0 00:05:49.119 12:00:01 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:49.119 12:00:01 -- bdev/nbd_common.sh@104 -- # count=0 00:05:49.119 12:00:01 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:49.119 12:00:01 -- bdev/nbd_common.sh@109 -- # return 0 00:05:49.119 12:00:01 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:49.119 12:00:02 -- event/event.sh@35 -- # sleep 3 00:05:49.379 [2024-06-11 12:00:02.263606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.379 [2024-06-11 12:00:02.290748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.379 [2024-06-11 12:00:02.290750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.379 [2024-06-11 12:00:02.322150] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:49.379 [2024-06-11 12:00:02.322187] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:52.680 12:00:05 -- event/event.sh@23 -- # for i in {0..2} 00:05:52.680 12:00:05 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:52.680 spdk_app_start Round 2 00:05:52.680 12:00:05 -- event/event.sh@25 -- # waitforlisten 1266859 /var/tmp/spdk-nbd.sock 00:05:52.680 12:00:05 -- common/autotest_common.sh@819 -- # '[' -z 1266859 ']' 00:05:52.680 12:00:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.680 12:00:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:52.680 12:00:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:52.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
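Each round repeats the nbd_dd_data_verify pattern traced above: fill a reference file from /dev/urandom, dd it onto every exported /dev/nbdX with O_DIRECT, then cmp the device contents back against the file before the devices are stopped. A small sketch of that write-and-verify loop, assuming the nbd devices are already connected (the temp-file path and function name are illustrative):

    verify_nbd_devices() {
        # Fill a reference file with 1 MiB of random data, copy it onto every
        # given nbd device with O_DIRECT, then compare the first 1M of each
        # device byte-for-byte against the reference file.
        local tmp_file=/tmp/nbdrandtest nbd
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 || return 1
        for nbd in "$@"; do
            dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct || return 1
        done
        for nbd in "$@"; do
            cmp -b -n 1M "$tmp_file" "$nbd" || return 1
        done
        rm -f "$tmp_file"
    }

    # e.g. verify_nbd_devices /dev/nbd0 /dev/nbd1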
00:05:52.680 12:00:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:52.680 12:00:05 -- common/autotest_common.sh@10 -- # set +x 00:05:52.680 12:00:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:52.680 12:00:05 -- common/autotest_common.sh@852 -- # return 0 00:05:52.680 12:00:05 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.680 Malloc0 00:05:52.680 12:00:05 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.680 Malloc1 00:05:52.680 12:00:05 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.680 12:00:05 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.680 12:00:05 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.680 12:00:05 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:52.680 12:00:05 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.680 12:00:05 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:52.680 12:00:05 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.680 12:00:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.680 12:00:05 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.680 12:00:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:52.680 12:00:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.680 12:00:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:52.680 12:00:05 -- bdev/nbd_common.sh@12 -- # local i 00:05:52.680 12:00:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:52.680 12:00:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.680 12:00:05 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:52.941 /dev/nbd0 00:05:52.941 12:00:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:52.941 12:00:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:52.941 12:00:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:52.941 12:00:05 -- common/autotest_common.sh@857 -- # local i 00:05:52.941 12:00:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:52.941 12:00:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:52.941 12:00:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:52.941 12:00:05 -- common/autotest_common.sh@861 -- # break 00:05:52.941 12:00:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:52.941 12:00:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:52.941 12:00:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.941 1+0 records in 00:05:52.941 1+0 records out 00:05:52.941 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000130928 s, 31.3 MB/s 00:05:52.941 12:00:05 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:52.941 12:00:05 -- common/autotest_common.sh@874 -- # size=4096 00:05:52.941 12:00:05 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:52.941 12:00:05 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:05:52.941 12:00:05 -- common/autotest_common.sh@877 -- # return 0 00:05:52.941 12:00:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.941 12:00:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.941 12:00:05 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:52.941 /dev/nbd1 00:05:52.941 12:00:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:52.941 12:00:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:52.941 12:00:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:52.941 12:00:05 -- common/autotest_common.sh@857 -- # local i 00:05:52.941 12:00:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:52.941 12:00:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:52.941 12:00:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:52.941 12:00:05 -- common/autotest_common.sh@861 -- # break 00:05:52.941 12:00:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:52.941 12:00:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:52.941 12:00:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.941 1+0 records in 00:05:52.941 1+0 records out 00:05:52.941 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269736 s, 15.2 MB/s 00:05:52.941 12:00:05 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.202 12:00:05 -- common/autotest_common.sh@874 -- # size=4096 00:05:53.202 12:00:05 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.202 12:00:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:53.202 12:00:05 -- common/autotest_common.sh@877 -- # return 0 00:05:53.202 12:00:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.202 12:00:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.202 12:00:05 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.202 12:00:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.202 12:00:05 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.202 12:00:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:53.202 { 00:05:53.202 "nbd_device": "/dev/nbd0", 00:05:53.202 "bdev_name": "Malloc0" 00:05:53.202 }, 00:05:53.202 { 00:05:53.202 "nbd_device": "/dev/nbd1", 00:05:53.202 "bdev_name": "Malloc1" 00:05:53.202 } 00:05:53.202 ]' 00:05:53.202 12:00:06 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:53.202 { 00:05:53.202 "nbd_device": "/dev/nbd0", 00:05:53.202 "bdev_name": "Malloc0" 00:05:53.202 }, 00:05:53.202 { 00:05:53.202 "nbd_device": "/dev/nbd1", 00:05:53.202 "bdev_name": "Malloc1" 00:05:53.202 } 00:05:53.202 ]' 00:05:53.202 12:00:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.202 12:00:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:53.202 /dev/nbd1' 00:05:53.202 12:00:06 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:53.202 /dev/nbd1' 00:05:53.202 12:00:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.202 12:00:06 -- bdev/nbd_common.sh@65 -- # count=2 00:05:53.202 12:00:06 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:53.202 12:00:06 -- bdev/nbd_common.sh@95 -- # count=2 00:05:53.202 12:00:06 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:53.202 12:00:06 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:53.202 12:00:06 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.202 12:00:06 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.202 12:00:06 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:53.202 12:00:06 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.202 12:00:06 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:53.202 12:00:06 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:53.202 256+0 records in 00:05:53.202 256+0 records out 00:05:53.202 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118508 s, 88.5 MB/s 00:05:53.202 12:00:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.202 12:00:06 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:53.202 256+0 records in 00:05:53.202 256+0 records out 00:05:53.202 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165871 s, 63.2 MB/s 00:05:53.202 12:00:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.202 12:00:06 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:53.463 256+0 records in 00:05:53.463 256+0 records out 00:05:53.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01671 s, 62.8 MB/s 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@51 -- # local i 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:53.463 12:00:06 -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@41 -- # break 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.463 12:00:06 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:53.722 12:00:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:53.722 12:00:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:53.722 12:00:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:53.722 12:00:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.722 12:00:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.722 12:00:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:53.722 12:00:06 -- bdev/nbd_common.sh@41 -- # break 00:05:53.722 12:00:06 -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.722 12:00:06 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.722 12:00:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.722 12:00:06 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.722 12:00:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:53.722 12:00:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:53.722 12:00:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.981 12:00:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:53.981 12:00:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:53.981 12:00:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.981 12:00:06 -- bdev/nbd_common.sh@65 -- # true 00:05:53.981 12:00:06 -- bdev/nbd_common.sh@65 -- # count=0 00:05:53.981 12:00:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:53.981 12:00:06 -- bdev/nbd_common.sh@104 -- # count=0 00:05:53.981 12:00:06 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:53.981 12:00:06 -- bdev/nbd_common.sh@109 -- # return 0 00:05:53.981 12:00:06 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:53.981 12:00:06 -- event/event.sh@35 -- # sleep 3 00:05:54.242 [2024-06-11 12:00:07.044344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.242 [2024-06-11 12:00:07.071687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.242 [2024-06-11 12:00:07.071689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.242 [2024-06-11 12:00:07.103217] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:54.242 [2024-06-11 12:00:07.103252] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
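Tearing a disk down goes through nbd_stop_disk and then waitfornbd_exit, which, as the grep/break trace above shows, simply polls /proc/partitions until the nbd entry disappears. A hedged sketch of that teardown wait (illustrative, not the exact nbd_common.sh code):

    wait_for_nbd_exit() {
        # Wait for the kernel to drop the named nbd device (e.g. "nbd1") from
        # /proc/partitions after nbd_stop_disk, retrying up to 20 times.
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || return 0
            sleep 0.1
        done
        return 1
    }

    # e.g. wait_for_nbd_exit nbd1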
00:05:57.585 12:00:09 -- event/event.sh@38 -- # waitforlisten 1266859 /var/tmp/spdk-nbd.sock 00:05:57.585 12:00:09 -- common/autotest_common.sh@819 -- # '[' -z 1266859 ']' 00:05:57.585 12:00:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.585 12:00:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:57.585 12:00:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:57.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:57.585 12:00:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:57.585 12:00:09 -- common/autotest_common.sh@10 -- # set +x 00:05:57.585 12:00:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:57.585 12:00:10 -- common/autotest_common.sh@852 -- # return 0 00:05:57.585 12:00:10 -- event/event.sh@39 -- # killprocess 1266859 00:05:57.585 12:00:10 -- common/autotest_common.sh@926 -- # '[' -z 1266859 ']' 00:05:57.585 12:00:10 -- common/autotest_common.sh@930 -- # kill -0 1266859 00:05:57.585 12:00:10 -- common/autotest_common.sh@931 -- # uname 00:05:57.585 12:00:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:57.585 12:00:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1266859 00:05:57.585 12:00:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:57.585 12:00:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:57.585 12:00:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1266859' 00:05:57.585 killing process with pid 1266859 00:05:57.585 12:00:10 -- common/autotest_common.sh@945 -- # kill 1266859 00:05:57.585 12:00:10 -- common/autotest_common.sh@950 -- # wait 1266859 00:05:57.585 spdk_app_start is called in Round 0. 00:05:57.585 Shutdown signal received, stop current app iteration 00:05:57.585 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:05:57.585 spdk_app_start is called in Round 1. 00:05:57.585 Shutdown signal received, stop current app iteration 00:05:57.585 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:05:57.585 spdk_app_start is called in Round 2. 00:05:57.585 Shutdown signal received, stop current app iteration 00:05:57.585 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:05:57.585 spdk_app_start is called in Round 3. 
00:05:57.585 Shutdown signal received, stop current app iteration 00:05:57.585 12:00:10 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:57.585 12:00:10 -- event/event.sh@42 -- # return 0 00:05:57.585 00:05:57.585 real 0m15.351s 00:05:57.585 user 0m33.264s 00:05:57.585 sys 0m2.103s 00:05:57.585 12:00:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.585 12:00:10 -- common/autotest_common.sh@10 -- # set +x 00:05:57.585 ************************************ 00:05:57.585 END TEST app_repeat 00:05:57.585 ************************************ 00:05:57.585 12:00:10 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:57.585 12:00:10 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:57.585 12:00:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:57.585 12:00:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.585 12:00:10 -- common/autotest_common.sh@10 -- # set +x 00:05:57.585 ************************************ 00:05:57.585 START TEST cpu_locks 00:05:57.585 ************************************ 00:05:57.585 12:00:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:57.585 * Looking for test storage... 00:05:57.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:57.585 12:00:10 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:57.585 12:00:10 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:57.585 12:00:10 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:57.585 12:00:10 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:57.585 12:00:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:57.585 12:00:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.585 12:00:10 -- common/autotest_common.sh@10 -- # set +x 00:05:57.585 ************************************ 00:05:57.585 START TEST default_locks 00:05:57.585 ************************************ 00:05:57.585 12:00:10 -- common/autotest_common.sh@1104 -- # default_locks 00:05:57.585 12:00:10 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1270850 00:05:57.585 12:00:10 -- event/cpu_locks.sh@47 -- # waitforlisten 1270850 00:05:57.585 12:00:10 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.585 12:00:10 -- common/autotest_common.sh@819 -- # '[' -z 1270850 ']' 00:05:57.585 12:00:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.585 12:00:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:57.585 12:00:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.585 12:00:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:57.585 12:00:10 -- common/autotest_common.sh@10 -- # set +x 00:05:57.585 [2024-06-11 12:00:10.436455] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:57.585 [2024-06-11 12:00:10.436516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1270850 ] 00:05:57.585 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.585 [2024-06-11 12:00:10.490448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.585 [2024-06-11 12:00:10.519915] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:57.585 [2024-06-11 12:00:10.520049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.214 12:00:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:58.214 12:00:11 -- common/autotest_common.sh@852 -- # return 0 00:05:58.214 12:00:11 -- event/cpu_locks.sh@49 -- # locks_exist 1270850 00:05:58.214 12:00:11 -- event/cpu_locks.sh@22 -- # lslocks -p 1270850 00:05:58.214 12:00:11 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.476 lslocks: write error 00:05:58.476 12:00:11 -- event/cpu_locks.sh@50 -- # killprocess 1270850 00:05:58.476 12:00:11 -- common/autotest_common.sh@926 -- # '[' -z 1270850 ']' 00:05:58.476 12:00:11 -- common/autotest_common.sh@930 -- # kill -0 1270850 00:05:58.476 12:00:11 -- common/autotest_common.sh@931 -- # uname 00:05:58.476 12:00:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:58.476 12:00:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1270850 00:05:58.476 12:00:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:58.476 12:00:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:58.476 12:00:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1270850' 00:05:58.476 killing process with pid 1270850 00:05:58.476 12:00:11 -- common/autotest_common.sh@945 -- # kill 1270850 00:05:58.476 12:00:11 -- common/autotest_common.sh@950 -- # wait 1270850 00:05:58.737 12:00:11 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1270850 00:05:58.737 12:00:11 -- common/autotest_common.sh@640 -- # local es=0 00:05:58.737 12:00:11 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1270850 00:05:58.737 12:00:11 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:58.737 12:00:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:58.737 12:00:11 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:58.737 12:00:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:58.737 12:00:11 -- common/autotest_common.sh@643 -- # waitforlisten 1270850 00:05:58.737 12:00:11 -- common/autotest_common.sh@819 -- # '[' -z 1270850 ']' 00:05:58.737 12:00:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.737 12:00:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:58.737 12:00:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
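The default_locks test above starts spdk_tgt with -m 0x1 and asserts that the running target holds its core lock by piping lslocks -p <pid> through grep -q spdk_cpu_lock. A minimal sketch of that check, assuming the util-linux lslocks tool and the spdk_cpu_lock file-name prefix seen in the trace; the pid in the example is the one from this run:

    core_lock_held() {
        # Succeed if the given spdk_tgt pid holds a file lock on one of the
        # spdk_cpu_lock files, as reported by lslocks.
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    # e.g. core_lock_held 1270850 && echo "core 0 lock is held"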
00:05:58.737 12:00:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:58.737 12:00:11 -- common/autotest_common.sh@10 -- # set +x 00:05:58.737 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1270850) - No such process 00:05:58.737 ERROR: process (pid: 1270850) is no longer running 00:05:58.737 12:00:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:58.737 12:00:11 -- common/autotest_common.sh@852 -- # return 1 00:05:58.737 12:00:11 -- common/autotest_common.sh@643 -- # es=1 00:05:58.737 12:00:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:58.737 12:00:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:58.737 12:00:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:58.737 12:00:11 -- event/cpu_locks.sh@54 -- # no_locks 00:05:58.737 12:00:11 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:58.737 12:00:11 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:58.737 12:00:11 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:58.737 00:05:58.737 real 0m1.301s 00:05:58.737 user 0m1.382s 00:05:58.737 sys 0m0.431s 00:05:58.737 12:00:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.737 12:00:11 -- common/autotest_common.sh@10 -- # set +x 00:05:58.737 ************************************ 00:05:58.737 END TEST default_locks 00:05:58.737 ************************************ 00:05:58.737 12:00:11 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:58.737 12:00:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:58.737 12:00:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.737 12:00:11 -- common/autotest_common.sh@10 -- # set +x 00:05:58.737 ************************************ 00:05:58.737 START TEST default_locks_via_rpc 00:05:58.737 ************************************ 00:05:58.737 12:00:11 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:05:58.737 12:00:11 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1271072 00:05:58.737 12:00:11 -- event/cpu_locks.sh@63 -- # waitforlisten 1271072 00:05:58.737 12:00:11 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.737 12:00:11 -- common/autotest_common.sh@819 -- # '[' -z 1271072 ']' 00:05:58.737 12:00:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.737 12:00:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:58.737 12:00:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.737 12:00:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:58.737 12:00:11 -- common/autotest_common.sh@10 -- # set +x 00:05:58.999 [2024-06-11 12:00:11.780093] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:58.999 [2024-06-11 12:00:11.780149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1271072 ] 00:05:58.999 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.999 [2024-06-11 12:00:11.843188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.999 [2024-06-11 12:00:11.875713] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:58.999 [2024-06-11 12:00:11.875860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.569 12:00:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:59.569 12:00:12 -- common/autotest_common.sh@852 -- # return 0 00:05:59.569 12:00:12 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:59.569 12:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.569 12:00:12 -- common/autotest_common.sh@10 -- # set +x 00:05:59.569 12:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.569 12:00:12 -- event/cpu_locks.sh@67 -- # no_locks 00:05:59.569 12:00:12 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:59.569 12:00:12 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:59.569 12:00:12 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:59.569 12:00:12 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:59.569 12:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.569 12:00:12 -- common/autotest_common.sh@10 -- # set +x 00:05:59.569 12:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.569 12:00:12 -- event/cpu_locks.sh@71 -- # locks_exist 1271072 00:05:59.569 12:00:12 -- event/cpu_locks.sh@22 -- # lslocks -p 1271072 00:05:59.569 12:00:12 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.139 12:00:12 -- event/cpu_locks.sh@73 -- # killprocess 1271072 00:06:00.139 12:00:12 -- common/autotest_common.sh@926 -- # '[' -z 1271072 ']' 00:06:00.139 12:00:12 -- common/autotest_common.sh@930 -- # kill -0 1271072 00:06:00.139 12:00:12 -- common/autotest_common.sh@931 -- # uname 00:06:00.139 12:00:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:00.139 12:00:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1271072 00:06:00.139 12:00:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:00.139 12:00:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:00.139 12:00:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1271072' 00:06:00.139 killing process with pid 1271072 00:06:00.139 12:00:13 -- common/autotest_common.sh@945 -- # kill 1271072 00:06:00.139 12:00:13 -- common/autotest_common.sh@950 -- # wait 1271072 00:06:00.400 00:06:00.400 real 0m1.478s 00:06:00.400 user 0m1.567s 00:06:00.400 sys 0m0.509s 00:06:00.400 12:00:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.400 12:00:13 -- common/autotest_common.sh@10 -- # set +x 00:06:00.400 ************************************ 00:06:00.400 END TEST default_locks_via_rpc 00:06:00.400 ************************************ 00:06:00.400 12:00:13 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:00.400 12:00:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:00.400 12:00:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.400 12:00:13 -- 
common/autotest_common.sh@10 -- # set +x 00:06:00.400 ************************************ 00:06:00.400 START TEST non_locking_app_on_locked_coremask 00:06:00.400 ************************************ 00:06:00.400 12:00:13 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:00.400 12:00:13 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1271418 00:06:00.400 12:00:13 -- event/cpu_locks.sh@81 -- # waitforlisten 1271418 /var/tmp/spdk.sock 00:06:00.400 12:00:13 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.400 12:00:13 -- common/autotest_common.sh@819 -- # '[' -z 1271418 ']' 00:06:00.400 12:00:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.400 12:00:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:00.400 12:00:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.400 12:00:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:00.400 12:00:13 -- common/autotest_common.sh@10 -- # set +x 00:06:00.400 [2024-06-11 12:00:13.310857] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:00.400 [2024-06-11 12:00:13.310922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1271418 ] 00:06:00.400 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.400 [2024-06-11 12:00:13.373391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.400 [2024-06-11 12:00:13.405451] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:00.400 [2024-06-11 12:00:13.405582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.341 12:00:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:01.341 12:00:14 -- common/autotest_common.sh@852 -- # return 0 00:06:01.341 12:00:14 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1271747 00:06:01.341 12:00:14 -- event/cpu_locks.sh@85 -- # waitforlisten 1271747 /var/tmp/spdk2.sock 00:06:01.341 12:00:14 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:01.341 12:00:14 -- common/autotest_common.sh@819 -- # '[' -z 1271747 ']' 00:06:01.341 12:00:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.341 12:00:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:01.341 12:00:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.341 12:00:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:01.341 12:00:14 -- common/autotest_common.sh@10 -- # set +x 00:06:01.341 [2024-06-11 12:00:14.110511] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:01.341 [2024-06-11 12:00:14.110563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1271747 ] 00:06:01.341 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.341 [2024-06-11 12:00:14.200301] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:01.341 [2024-06-11 12:00:14.200328] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.341 [2024-06-11 12:00:14.257108] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:01.341 [2024-06-11 12:00:14.257238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.913 12:00:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:01.914 12:00:14 -- common/autotest_common.sh@852 -- # return 0 00:06:01.914 12:00:14 -- event/cpu_locks.sh@87 -- # locks_exist 1271418 00:06:01.914 12:00:14 -- event/cpu_locks.sh@22 -- # lslocks -p 1271418 00:06:01.914 12:00:14 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.488 lslocks: write error 00:06:02.488 12:00:15 -- event/cpu_locks.sh@89 -- # killprocess 1271418 00:06:02.488 12:00:15 -- common/autotest_common.sh@926 -- # '[' -z 1271418 ']' 00:06:02.488 12:00:15 -- common/autotest_common.sh@930 -- # kill -0 1271418 00:06:02.488 12:00:15 -- common/autotest_common.sh@931 -- # uname 00:06:02.488 12:00:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:02.488 12:00:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1271418 00:06:02.488 12:00:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:02.488 12:00:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:02.488 12:00:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1271418' 00:06:02.488 killing process with pid 1271418 00:06:02.488 12:00:15 -- common/autotest_common.sh@945 -- # kill 1271418 00:06:02.488 12:00:15 -- common/autotest_common.sh@950 -- # wait 1271418 00:06:03.060 12:00:15 -- event/cpu_locks.sh@90 -- # killprocess 1271747 00:06:03.060 12:00:15 -- common/autotest_common.sh@926 -- # '[' -z 1271747 ']' 00:06:03.060 12:00:15 -- common/autotest_common.sh@930 -- # kill -0 1271747 00:06:03.060 12:00:15 -- common/autotest_common.sh@931 -- # uname 00:06:03.060 12:00:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:03.060 12:00:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1271747 00:06:03.060 12:00:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:03.060 12:00:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:03.060 12:00:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1271747' 00:06:03.060 killing process with pid 1271747 00:06:03.060 12:00:15 -- common/autotest_common.sh@945 -- # kill 1271747 00:06:03.060 12:00:15 -- common/autotest_common.sh@950 -- # wait 1271747 00:06:03.060 00:06:03.060 real 0m2.823s 00:06:03.060 user 0m3.054s 00:06:03.060 sys 0m0.870s 00:06:03.060 12:00:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.060 12:00:16 -- common/autotest_common.sh@10 -- # set +x 00:06:03.060 ************************************ 00:06:03.060 END TEST non_locking_app_on_locked_coremask 00:06:03.060 ************************************ 00:06:03.322 12:00:16 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:06:03.322 12:00:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:03.322 12:00:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:03.322 12:00:16 -- common/autotest_common.sh@10 -- # set +x 00:06:03.322 ************************************ 00:06:03.322 START TEST locking_app_on_unlocked_coremask 00:06:03.322 ************************************ 00:06:03.322 12:00:16 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:03.322 12:00:16 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1272140 00:06:03.322 12:00:16 -- event/cpu_locks.sh@99 -- # waitforlisten 1272140 /var/tmp/spdk.sock 00:06:03.322 12:00:16 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:03.322 12:00:16 -- common/autotest_common.sh@819 -- # '[' -z 1272140 ']' 00:06:03.322 12:00:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.322 12:00:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:03.322 12:00:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.322 12:00:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:03.322 12:00:16 -- common/autotest_common.sh@10 -- # set +x 00:06:03.322 [2024-06-11 12:00:16.180034] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:03.322 [2024-06-11 12:00:16.180084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1272140 ] 00:06:03.322 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.322 [2024-06-11 12:00:16.240138] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:03.322 [2024-06-11 12:00:16.240173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.322 [2024-06-11 12:00:16.267112] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:03.322 [2024-06-11 12:00:16.267242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.265 12:00:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:04.265 12:00:16 -- common/autotest_common.sh@852 -- # return 0 00:06:04.265 12:00:16 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1272215 00:06:04.265 12:00:16 -- event/cpu_locks.sh@103 -- # waitforlisten 1272215 /var/tmp/spdk2.sock 00:06:04.265 12:00:16 -- common/autotest_common.sh@819 -- # '[' -z 1272215 ']' 00:06:04.265 12:00:16 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:04.265 12:00:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.265 12:00:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:04.265 12:00:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
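The lslocks/grep pairs that recur throughout this trace come from cpu_locks.sh@22 and are the lock check every test here relies on: they confirm whether a given spdk_tgt pid holds one of the /var/tmp/spdk_cpu_lock_* core-lock files. A rough, illustrative reconstruction of that helper, assuming nothing beyond what the trace shows:

  locks_exist() {
      local pid=$1
      # succeeds only if the pid holds a lock on a /var/tmp/spdk_cpu_lock_* file
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

The stray "lslocks: write error" lines in the output are most likely lslocks hitting a closed pipe after grep -q exits on its first match; the tests keep passing regardless, so they are noise rather than failures.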
00:06:04.265 12:00:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:04.265 12:00:16 -- common/autotest_common.sh@10 -- # set +x 00:06:04.265 [2024-06-11 12:00:16.985538] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:04.265 [2024-06-11 12:00:16.985589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1272215 ] 00:06:04.265 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.265 [2024-06-11 12:00:17.076635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.265 [2024-06-11 12:00:17.137565] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:04.265 [2024-06-11 12:00:17.137703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.837 12:00:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:04.837 12:00:17 -- common/autotest_common.sh@852 -- # return 0 00:06:04.837 12:00:17 -- event/cpu_locks.sh@105 -- # locks_exist 1272215 00:06:04.837 12:00:17 -- event/cpu_locks.sh@22 -- # lslocks -p 1272215 00:06:04.837 12:00:17 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.411 lslocks: write error 00:06:05.411 12:00:18 -- event/cpu_locks.sh@107 -- # killprocess 1272140 00:06:05.411 12:00:18 -- common/autotest_common.sh@926 -- # '[' -z 1272140 ']' 00:06:05.411 12:00:18 -- common/autotest_common.sh@930 -- # kill -0 1272140 00:06:05.411 12:00:18 -- common/autotest_common.sh@931 -- # uname 00:06:05.411 12:00:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:05.411 12:00:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1272140 00:06:05.411 12:00:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:05.411 12:00:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:05.411 12:00:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1272140' 00:06:05.411 killing process with pid 1272140 00:06:05.411 12:00:18 -- common/autotest_common.sh@945 -- # kill 1272140 00:06:05.411 12:00:18 -- common/autotest_common.sh@950 -- # wait 1272140 00:06:05.673 12:00:18 -- event/cpu_locks.sh@108 -- # killprocess 1272215 00:06:05.673 12:00:18 -- common/autotest_common.sh@926 -- # '[' -z 1272215 ']' 00:06:05.673 12:00:18 -- common/autotest_common.sh@930 -- # kill -0 1272215 00:06:05.673 12:00:18 -- common/autotest_common.sh@931 -- # uname 00:06:05.673 12:00:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:05.673 12:00:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1272215 00:06:05.934 12:00:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:05.934 12:00:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:05.934 12:00:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1272215' 00:06:05.934 killing process with pid 1272215 00:06:05.934 12:00:18 -- common/autotest_common.sh@945 -- # kill 1272215 00:06:05.934 12:00:18 -- common/autotest_common.sh@950 -- # wait 1272215 00:06:05.934 00:06:05.934 real 0m2.828s 00:06:05.934 user 0m3.086s 00:06:05.934 sys 0m0.851s 00:06:05.934 12:00:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.934 12:00:18 -- common/autotest_common.sh@10 -- # set +x 00:06:05.934 ************************************ 00:06:05.934 END TEST locking_app_on_unlocked_coremask 
00:06:05.934 ************************************ 00:06:06.195 12:00:18 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:06.195 12:00:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:06.195 12:00:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:06.195 12:00:18 -- common/autotest_common.sh@10 -- # set +x 00:06:06.195 ************************************ 00:06:06.196 START TEST locking_app_on_locked_coremask 00:06:06.196 ************************************ 00:06:06.196 12:00:18 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:06.196 12:00:18 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1272783 00:06:06.196 12:00:18 -- event/cpu_locks.sh@116 -- # waitforlisten 1272783 /var/tmp/spdk.sock 00:06:06.196 12:00:18 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.196 12:00:18 -- common/autotest_common.sh@819 -- # '[' -z 1272783 ']' 00:06:06.196 12:00:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.196 12:00:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:06.196 12:00:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.196 12:00:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:06.196 12:00:18 -- common/autotest_common.sh@10 -- # set +x 00:06:06.196 [2024-06-11 12:00:19.044317] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:06.196 [2024-06-11 12:00:19.044369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1272783 ] 00:06:06.196 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.196 [2024-06-11 12:00:19.104205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.196 [2024-06-11 12:00:19.132453] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:06.196 [2024-06-11 12:00:19.132590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.767 12:00:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:07.029 12:00:19 -- common/autotest_common.sh@852 -- # return 0 00:06:07.029 12:00:19 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1272869 00:06:07.029 12:00:19 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1272869 /var/tmp/spdk2.sock 00:06:07.029 12:00:19 -- common/autotest_common.sh@640 -- # local es=0 00:06:07.029 12:00:19 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:07.029 12:00:19 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1272869 /var/tmp/spdk2.sock 00:06:07.029 12:00:19 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:07.029 12:00:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:07.029 12:00:19 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:07.029 12:00:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:07.029 12:00:19 -- common/autotest_common.sh@643 -- # waitforlisten 1272869 /var/tmp/spdk2.sock 00:06:07.029 12:00:19 -- common/autotest_common.sh@819 -- 
# '[' -z 1272869 ']' 00:06:07.029 12:00:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.029 12:00:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:07.029 12:00:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.029 12:00:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:07.029 12:00:19 -- common/autotest_common.sh@10 -- # set +x 00:06:07.029 [2024-06-11 12:00:19.828778] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:07.029 [2024-06-11 12:00:19.828823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1272869 ] 00:06:07.029 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.029 [2024-06-11 12:00:19.909959] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1272783 has claimed it. 00:06:07.029 [2024-06-11 12:00:19.909998] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:07.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1272869) - No such process 00:06:07.600 ERROR: process (pid: 1272869) is no longer running 00:06:07.600 12:00:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:07.600 12:00:20 -- common/autotest_common.sh@852 -- # return 1 00:06:07.600 12:00:20 -- common/autotest_common.sh@643 -- # es=1 00:06:07.600 12:00:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:07.600 12:00:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:07.600 12:00:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:07.600 12:00:20 -- event/cpu_locks.sh@122 -- # locks_exist 1272783 00:06:07.600 12:00:20 -- event/cpu_locks.sh@22 -- # lslocks -p 1272783 00:06:07.600 12:00:20 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.174 lslocks: write error 00:06:08.174 12:00:20 -- event/cpu_locks.sh@124 -- # killprocess 1272783 00:06:08.174 12:00:20 -- common/autotest_common.sh@926 -- # '[' -z 1272783 ']' 00:06:08.174 12:00:20 -- common/autotest_common.sh@930 -- # kill -0 1272783 00:06:08.175 12:00:20 -- common/autotest_common.sh@931 -- # uname 00:06:08.175 12:00:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:08.175 12:00:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1272783 00:06:08.175 12:00:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:08.175 12:00:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:08.175 12:00:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1272783' 00:06:08.175 killing process with pid 1272783 00:06:08.175 12:00:21 -- common/autotest_common.sh@945 -- # kill 1272783 00:06:08.175 12:00:21 -- common/autotest_common.sh@950 -- # wait 1272783 00:06:08.436 00:06:08.436 real 0m2.226s 00:06:08.436 user 0m2.458s 00:06:08.436 sys 0m0.584s 00:06:08.436 12:00:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.436 12:00:21 -- common/autotest_common.sh@10 -- # set +x 00:06:08.436 ************************************ 00:06:08.436 END TEST locking_app_on_locked_coremask 00:06:08.436 ************************************ 00:06:08.436 
12:00:21 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:08.436 12:00:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:08.436 12:00:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.436 12:00:21 -- common/autotest_common.sh@10 -- # set +x 00:06:08.436 ************************************ 00:06:08.436 START TEST locking_overlapped_coremask 00:06:08.436 ************************************ 00:06:08.436 12:00:21 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:08.436 12:00:21 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1273232 00:06:08.436 12:00:21 -- event/cpu_locks.sh@133 -- # waitforlisten 1273232 /var/tmp/spdk.sock 00:06:08.436 12:00:21 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:08.436 12:00:21 -- common/autotest_common.sh@819 -- # '[' -z 1273232 ']' 00:06:08.436 12:00:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.436 12:00:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:08.436 12:00:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.436 12:00:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:08.436 12:00:21 -- common/autotest_common.sh@10 -- # set +x 00:06:08.436 [2024-06-11 12:00:21.311578] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:08.436 [2024-06-11 12:00:21.311641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1273232 ] 00:06:08.436 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.436 [2024-06-11 12:00:21.376033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.436 [2024-06-11 12:00:21.407730] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:08.436 [2024-06-11 12:00:21.407886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.436 [2024-06-11 12:00:21.407985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.436 [2024-06-11 12:00:21.407989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.378 12:00:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:09.378 12:00:22 -- common/autotest_common.sh@852 -- # return 0 00:06:09.378 12:00:22 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1273356 00:06:09.379 12:00:22 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1273356 /var/tmp/spdk2.sock 00:06:09.379 12:00:22 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:09.379 12:00:22 -- common/autotest_common.sh@640 -- # local es=0 00:06:09.379 12:00:22 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1273356 /var/tmp/spdk2.sock 00:06:09.379 12:00:22 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:09.379 12:00:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:09.379 12:00:22 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:09.379 12:00:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:09.379 12:00:22 
-- common/autotest_common.sh@643 -- # waitforlisten 1273356 /var/tmp/spdk2.sock 00:06:09.379 12:00:22 -- common/autotest_common.sh@819 -- # '[' -z 1273356 ']' 00:06:09.379 12:00:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.379 12:00:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:09.379 12:00:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.379 12:00:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:09.379 12:00:22 -- common/autotest_common.sh@10 -- # set +x 00:06:09.379 [2024-06-11 12:00:22.118406] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:09.379 [2024-06-11 12:00:22.118457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1273356 ] 00:06:09.379 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.379 [2024-06-11 12:00:22.189790] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1273232 has claimed it. 00:06:09.379 [2024-06-11 12:00:22.189820] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:09.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1273356) - No such process 00:06:09.952 ERROR: process (pid: 1273356) is no longer running 00:06:09.952 12:00:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:09.952 12:00:22 -- common/autotest_common.sh@852 -- # return 1 00:06:09.952 12:00:22 -- common/autotest_common.sh@643 -- # es=1 00:06:09.952 12:00:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:09.952 12:00:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:09.952 12:00:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:09.952 12:00:22 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:09.952 12:00:22 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:09.952 12:00:22 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:09.952 12:00:22 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:09.952 12:00:22 -- event/cpu_locks.sh@141 -- # killprocess 1273232 00:06:09.952 12:00:22 -- common/autotest_common.sh@926 -- # '[' -z 1273232 ']' 00:06:09.952 12:00:22 -- common/autotest_common.sh@930 -- # kill -0 1273232 00:06:09.952 12:00:22 -- common/autotest_common.sh@931 -- # uname 00:06:09.952 12:00:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:09.952 12:00:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1273232 00:06:09.952 12:00:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:09.952 12:00:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:09.952 12:00:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1273232' 00:06:09.952 killing process with pid 1273232 00:06:09.952 12:00:22 -- common/autotest_common.sh@945 -- # kill 1273232 00:06:09.952 12:00:22 
-- common/autotest_common.sh@950 -- # wait 1273232 00:06:09.952 00:06:09.952 real 0m1.693s 00:06:09.952 user 0m4.862s 00:06:09.952 sys 0m0.354s 00:06:09.953 12:00:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.953 12:00:22 -- common/autotest_common.sh@10 -- # set +x 00:06:09.953 ************************************ 00:06:09.953 END TEST locking_overlapped_coremask 00:06:09.953 ************************************ 00:06:10.213 12:00:22 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:10.213 12:00:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:10.213 12:00:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:10.213 12:00:22 -- common/autotest_common.sh@10 -- # set +x 00:06:10.213 ************************************ 00:06:10.213 START TEST locking_overlapped_coremask_via_rpc 00:06:10.213 ************************************ 00:06:10.213 12:00:22 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:10.213 12:00:22 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1273608 00:06:10.213 12:00:23 -- event/cpu_locks.sh@149 -- # waitforlisten 1273608 /var/tmp/spdk.sock 00:06:10.213 12:00:22 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:10.213 12:00:23 -- common/autotest_common.sh@819 -- # '[' -z 1273608 ']' 00:06:10.213 12:00:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.213 12:00:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:10.213 12:00:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.213 12:00:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:10.213 12:00:23 -- common/autotest_common.sh@10 -- # set +x 00:06:10.213 [2024-06-11 12:00:23.050731] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:10.213 [2024-06-11 12:00:23.050794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1273608 ] 00:06:10.213 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.213 [2024-06-11 12:00:23.111434] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:10.213 [2024-06-11 12:00:23.111464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.213 [2024-06-11 12:00:23.142646] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:10.213 [2024-06-11 12:00:23.142899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.213 [2024-06-11 12:00:23.143023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.213 [2024-06-11 12:00:23.143025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.784 12:00:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:10.784 12:00:23 -- common/autotest_common.sh@852 -- # return 0 00:06:10.784 12:00:23 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:10.785 12:00:23 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1273766 00:06:10.785 12:00:23 -- event/cpu_locks.sh@153 -- # waitforlisten 1273766 /var/tmp/spdk2.sock 00:06:10.785 12:00:23 -- common/autotest_common.sh@819 -- # '[' -z 1273766 ']' 00:06:10.785 12:00:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.785 12:00:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:10.785 12:00:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.785 12:00:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:10.785 12:00:23 -- common/autotest_common.sh@10 -- # set +x 00:06:11.046 [2024-06-11 12:00:23.831538] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:11.047 [2024-06-11 12:00:23.831588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1273766 ] 00:06:11.047 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.047 [2024-06-11 12:00:23.903990] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:11.047 [2024-06-11 12:00:23.904010] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.047 [2024-06-11 12:00:23.959284] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:11.047 [2024-06-11 12:00:23.959507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.047 [2024-06-11 12:00:23.959666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.047 [2024-06-11 12:00:23.959669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:11.619 12:00:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:11.619 12:00:24 -- common/autotest_common.sh@852 -- # return 0 00:06:11.619 12:00:24 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:11.619 12:00:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:11.619 12:00:24 -- common/autotest_common.sh@10 -- # set +x 00:06:11.619 12:00:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:11.619 12:00:24 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.619 12:00:24 -- common/autotest_common.sh@640 -- # local es=0 00:06:11.619 12:00:24 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.619 12:00:24 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:11.619 12:00:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:11.619 12:00:24 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:11.619 12:00:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:11.619 12:00:24 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.619 12:00:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:11.619 12:00:24 -- common/autotest_common.sh@10 -- # set +x 00:06:11.619 [2024-06-11 12:00:24.619076] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1273608 has claimed it. 00:06:11.619 request: 00:06:11.619 { 00:06:11.619 "method": "framework_enable_cpumask_locks", 00:06:11.619 "req_id": 1 00:06:11.619 } 00:06:11.619 Got JSON-RPC error response 00:06:11.619 response: 00:06:11.619 { 00:06:11.619 "code": -32603, 00:06:11.619 "message": "Failed to claim CPU core: 2" 00:06:11.619 } 00:06:11.619 12:00:24 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:11.619 12:00:24 -- common/autotest_common.sh@643 -- # es=1 00:06:11.619 12:00:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:11.619 12:00:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:11.619 12:00:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:11.619 12:00:24 -- event/cpu_locks.sh@158 -- # waitforlisten 1273608 /var/tmp/spdk.sock 00:06:11.619 12:00:24 -- common/autotest_common.sh@819 -- # '[' -z 1273608 ']' 00:06:11.619 12:00:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.619 12:00:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:11.619 12:00:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
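The failed RPC just above is the point of the locking_overlapped_coremask_via_rpc test: both targets are started with --disable-cpumask-locks, so neither claims core locks at boot; the first target then claims its cores over JSON-RPC, and the second target's attempt to do the same fails on the shared core with the -32603 "Failed to claim CPU core: 2" response. Stripped of the harness plumbing (waitforlisten, NOT, xtrace), the sequence is roughly the sketch below; the scripts/rpc.py path is an assumption, everything else is copied from the trace:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                           # cores 0-2, no lock files yet
  $SPDK/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # cores 2-4, overlaps on core 2
  # (the real test waits for each UNIX socket to come up before issuing RPCs)
  $SPDK/scripts/rpc.py framework_enable_cpumask_locks                                 # first target claims /var/tmp/spdk_cpu_lock_000..002
  $SPDK/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks          # expected to fail: core 2 already claimed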
00:06:11.619 12:00:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:11.619 12:00:24 -- common/autotest_common.sh@10 -- # set +x 00:06:11.880 12:00:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:11.881 12:00:24 -- common/autotest_common.sh@852 -- # return 0 00:06:11.881 12:00:24 -- event/cpu_locks.sh@159 -- # waitforlisten 1273766 /var/tmp/spdk2.sock 00:06:11.881 12:00:24 -- common/autotest_common.sh@819 -- # '[' -z 1273766 ']' 00:06:11.881 12:00:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.881 12:00:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:11.881 12:00:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.881 12:00:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:11.881 12:00:24 -- common/autotest_common.sh@10 -- # set +x 00:06:12.142 12:00:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:12.142 12:00:24 -- common/autotest_common.sh@852 -- # return 0 00:06:12.142 12:00:24 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:12.142 12:00:24 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:12.142 12:00:24 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:12.142 12:00:24 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:12.142 00:06:12.142 real 0m1.954s 00:06:12.142 user 0m0.748s 00:06:12.142 sys 0m0.132s 00:06:12.142 12:00:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.142 12:00:24 -- common/autotest_common.sh@10 -- # set +x 00:06:12.142 ************************************ 00:06:12.142 END TEST locking_overlapped_coremask_via_rpc 00:06:12.142 ************************************ 00:06:12.142 12:00:24 -- event/cpu_locks.sh@174 -- # cleanup 00:06:12.142 12:00:24 -- event/cpu_locks.sh@15 -- # [[ -z 1273608 ]] 00:06:12.142 12:00:24 -- event/cpu_locks.sh@15 -- # killprocess 1273608 00:06:12.142 12:00:24 -- common/autotest_common.sh@926 -- # '[' -z 1273608 ']' 00:06:12.142 12:00:24 -- common/autotest_common.sh@930 -- # kill -0 1273608 00:06:12.142 12:00:24 -- common/autotest_common.sh@931 -- # uname 00:06:12.142 12:00:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:12.142 12:00:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1273608 00:06:12.142 12:00:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:12.142 12:00:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:12.142 12:00:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1273608' 00:06:12.142 killing process with pid 1273608 00:06:12.142 12:00:25 -- common/autotest_common.sh@945 -- # kill 1273608 00:06:12.142 12:00:25 -- common/autotest_common.sh@950 -- # wait 1273608 00:06:12.402 12:00:25 -- event/cpu_locks.sh@16 -- # [[ -z 1273766 ]] 00:06:12.402 12:00:25 -- event/cpu_locks.sh@16 -- # killprocess 1273766 00:06:12.402 12:00:25 -- common/autotest_common.sh@926 -- # '[' -z 1273766 ']' 00:06:12.402 12:00:25 -- common/autotest_common.sh@930 -- # kill -0 1273766 00:06:12.402 12:00:25 -- common/autotest_common.sh@931 -- # uname 
00:06:12.402 12:00:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:12.402 12:00:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1273766 00:06:12.402 12:00:25 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:12.402 12:00:25 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:12.402 12:00:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1273766' 00:06:12.402 killing process with pid 1273766 00:06:12.402 12:00:25 -- common/autotest_common.sh@945 -- # kill 1273766 00:06:12.402 12:00:25 -- common/autotest_common.sh@950 -- # wait 1273766 00:06:12.663 12:00:25 -- event/cpu_locks.sh@18 -- # rm -f 00:06:12.663 12:00:25 -- event/cpu_locks.sh@1 -- # cleanup 00:06:12.663 12:00:25 -- event/cpu_locks.sh@15 -- # [[ -z 1273608 ]] 00:06:12.663 12:00:25 -- event/cpu_locks.sh@15 -- # killprocess 1273608 00:06:12.663 12:00:25 -- common/autotest_common.sh@926 -- # '[' -z 1273608 ']' 00:06:12.663 12:00:25 -- common/autotest_common.sh@930 -- # kill -0 1273608 00:06:12.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1273608) - No such process 00:06:12.663 12:00:25 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1273608 is not found' 00:06:12.663 Process with pid 1273608 is not found 00:06:12.663 12:00:25 -- event/cpu_locks.sh@16 -- # [[ -z 1273766 ]] 00:06:12.663 12:00:25 -- event/cpu_locks.sh@16 -- # killprocess 1273766 00:06:12.663 12:00:25 -- common/autotest_common.sh@926 -- # '[' -z 1273766 ']' 00:06:12.663 12:00:25 -- common/autotest_common.sh@930 -- # kill -0 1273766 00:06:12.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1273766) - No such process 00:06:12.663 12:00:25 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1273766 is not found' 00:06:12.663 Process with pid 1273766 is not found 00:06:12.663 12:00:25 -- event/cpu_locks.sh@18 -- # rm -f 00:06:12.663 00:06:12.663 real 0m15.206s 00:06:12.663 user 0m26.583s 00:06:12.663 sys 0m4.481s 00:06:12.663 12:00:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.663 12:00:25 -- common/autotest_common.sh@10 -- # set +x 00:06:12.663 ************************************ 00:06:12.663 END TEST cpu_locks 00:06:12.663 ************************************ 00:06:12.663 00:06:12.663 real 0m40.636s 00:06:12.663 user 1m20.336s 00:06:12.663 sys 0m7.409s 00:06:12.663 12:00:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.663 12:00:25 -- common/autotest_common.sh@10 -- # set +x 00:06:12.663 ************************************ 00:06:12.663 END TEST event 00:06:12.663 ************************************ 00:06:12.663 12:00:25 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:12.663 12:00:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:12.663 12:00:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.663 12:00:25 -- common/autotest_common.sh@10 -- # set +x 00:06:12.664 ************************************ 00:06:12.664 START TEST thread 00:06:12.664 ************************************ 00:06:12.664 12:00:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:12.664 * Looking for test storage... 
00:06:12.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:12.664 12:00:25 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:12.664 12:00:25 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:12.664 12:00:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.664 12:00:25 -- common/autotest_common.sh@10 -- # set +x 00:06:12.664 ************************************ 00:06:12.664 START TEST thread_poller_perf 00:06:12.664 ************************************ 00:06:12.664 12:00:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:12.664 [2024-06-11 12:00:25.691442] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:12.664 [2024-06-11 12:00:25.691551] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1274279 ] 00:06:12.925 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.925 [2024-06-11 12:00:25.761625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.925 [2024-06-11 12:00:25.797808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.925 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:13.865 ====================================== 00:06:13.865 busy:2414961632 (cyc) 00:06:13.865 total_run_count: 276000 00:06:13.865 tsc_hz: 2400000000 (cyc) 00:06:13.865 ====================================== 00:06:13.865 poller_cost: 8749 (cyc), 3645 (nsec) 00:06:13.865 00:06:13.865 real 0m1.176s 00:06:13.866 user 0m1.091s 00:06:13.866 sys 0m0.081s 00:06:13.866 12:00:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.866 12:00:26 -- common/autotest_common.sh@10 -- # set +x 00:06:13.866 ************************************ 00:06:13.866 END TEST thread_poller_perf 00:06:13.866 ************************************ 00:06:13.866 12:00:26 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:13.866 12:00:26 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:13.866 12:00:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.866 12:00:26 -- common/autotest_common.sh@10 -- # set +x 00:06:13.866 ************************************ 00:06:13.866 START TEST thread_poller_perf 00:06:13.866 ************************************ 00:06:13.866 12:00:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:14.126 [2024-06-11 12:00:26.910355] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
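The summary above is internally consistent: poller_perf was run with -b 1000 -l 1 -t 1 (1000 pollers, 1 microsecond period, 1 second), and poller_cost is the total busy TSC cycles divided by the run count, converted to nanoseconds with the reported 2.4 GHz TSC rate (the printed figures match if the tool truncates rather than rounds). A quick shell re-check of the numbers:

  echo $(( 2414961632 / 276000 ))                               # 8749 cyc per poller iteration
  echo $(( 2414961632 / 276000 * 1000000000 / 2400000000 ))     # 3645 ns at tsc_hz 2400000000

The second run below repeats the measurement with -l 0 (no poller period), and the same arithmetic reproduces its much cheaper 631 cyc / 262 nsec result.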
00:06:14.126 [2024-06-11 12:00:26.910448] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1274421 ] 00:06:14.126 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.126 [2024-06-11 12:00:26.974139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.126 [2024-06-11 12:00:27.001143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.126 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:15.066 ====================================== 00:06:15.066 busy:2402731040 (cyc) 00:06:15.066 total_run_count: 3802000 00:06:15.066 tsc_hz: 2400000000 (cyc) 00:06:15.066 ====================================== 00:06:15.066 poller_cost: 631 (cyc), 262 (nsec) 00:06:15.066 00:06:15.066 real 0m1.150s 00:06:15.066 user 0m1.085s 00:06:15.066 sys 0m0.061s 00:06:15.066 12:00:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.066 12:00:28 -- common/autotest_common.sh@10 -- # set +x 00:06:15.066 ************************************ 00:06:15.066 END TEST thread_poller_perf 00:06:15.066 ************************************ 00:06:15.066 12:00:28 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:15.066 00:06:15.066 real 0m2.511s 00:06:15.066 user 0m2.259s 00:06:15.066 sys 0m0.265s 00:06:15.066 12:00:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.066 12:00:28 -- common/autotest_common.sh@10 -- # set +x 00:06:15.066 ************************************ 00:06:15.066 END TEST thread 00:06:15.066 ************************************ 00:06:15.329 12:00:28 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:15.329 12:00:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:15.329 12:00:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.329 12:00:28 -- common/autotest_common.sh@10 -- # set +x 00:06:15.329 ************************************ 00:06:15.329 START TEST accel 00:06:15.329 ************************************ 00:06:15.329 12:00:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:15.329 * Looking for test storage... 00:06:15.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:15.329 12:00:28 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:15.329 12:00:28 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:15.329 12:00:28 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:15.329 12:00:28 -- accel/accel.sh@59 -- # spdk_tgt_pid=1274804 00:06:15.329 12:00:28 -- accel/accel.sh@60 -- # waitforlisten 1274804 00:06:15.329 12:00:28 -- common/autotest_common.sh@819 -- # '[' -z 1274804 ']' 00:06:15.329 12:00:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.329 12:00:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:15.329 12:00:28 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:15.329 12:00:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:15.329 12:00:28 -- accel/accel.sh@58 -- # build_accel_config 00:06:15.329 12:00:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:15.329 12:00:28 -- common/autotest_common.sh@10 -- # set +x 00:06:15.329 12:00:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.329 12:00:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.329 12:00:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.329 12:00:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.329 12:00:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.329 12:00:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.329 12:00:28 -- accel/accel.sh@42 -- # jq -r . 00:06:15.329 [2024-06-11 12:00:28.259070] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:15.329 [2024-06-11 12:00:28.259143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1274804 ] 00:06:15.329 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.329 [2024-06-11 12:00:28.325196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.329 [2024-06-11 12:00:28.361762] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:15.329 [2024-06-11 12:00:28.361910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.270 12:00:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:16.270 12:00:29 -- common/autotest_common.sh@852 -- # return 0 00:06:16.270 12:00:29 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:16.270 12:00:29 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:16.270 12:00:29 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:16.270 12:00:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:16.270 12:00:29 -- common/autotest_common.sh@10 -- # set +x 00:06:16.270 12:00:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:16.270 12:00:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # IFS== 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:16.270 12:00:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:16.270 12:00:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # IFS== 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:16.270 12:00:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:16.270 12:00:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # IFS== 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:16.270 12:00:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:16.270 12:00:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # IFS== 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:16.270 12:00:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:16.270 12:00:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # IFS== 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:16.270 12:00:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:16.270 12:00:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # IFS== 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:16.270 12:00:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:16.270 12:00:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # IFS== 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:16.270 12:00:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:16.270 12:00:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # IFS== 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:16.270 12:00:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:16.270 12:00:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # IFS== 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:16.270 12:00:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:16.270 12:00:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # IFS== 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:16.270 12:00:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:16.270 12:00:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # IFS== 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:16.270 12:00:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:16.270 12:00:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # IFS== 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:16.270 
12:00:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:16.270 12:00:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # IFS== 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:16.270 12:00:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:16.270 12:00:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # IFS== 00:06:16.270 12:00:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:16.270 12:00:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:16.270 12:00:29 -- accel/accel.sh@67 -- # killprocess 1274804 00:06:16.270 12:00:29 -- common/autotest_common.sh@926 -- # '[' -z 1274804 ']' 00:06:16.271 12:00:29 -- common/autotest_common.sh@930 -- # kill -0 1274804 00:06:16.271 12:00:29 -- common/autotest_common.sh@931 -- # uname 00:06:16.271 12:00:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:16.271 12:00:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1274804 00:06:16.271 12:00:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:16.271 12:00:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:16.271 12:00:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1274804' 00:06:16.271 killing process with pid 1274804 00:06:16.271 12:00:29 -- common/autotest_common.sh@945 -- # kill 1274804 00:06:16.271 12:00:29 -- common/autotest_common.sh@950 -- # wait 1274804 00:06:16.531 12:00:29 -- accel/accel.sh@68 -- # trap - ERR 00:06:16.531 12:00:29 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:16.531 12:00:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:16.531 12:00:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.531 12:00:29 -- common/autotest_common.sh@10 -- # set +x 00:06:16.531 12:00:29 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:16.531 12:00:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:16.531 12:00:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.531 12:00:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.531 12:00:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.531 12:00:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.531 12:00:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.531 12:00:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.531 12:00:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.531 12:00:29 -- accel/accel.sh@42 -- # jq -r . 
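The get_expected_opcs block above records which module currently backs each accel opcode before any tests run; with no accel JSON config loaded (accel_json_cfg is empty in the trace), every opcode comes back as software, which is what the later per-opcode tests rely on. The RPC/jq pipeline is hard to read wrapped across the trace lines; reassembled (rpc.py path assumed, the jq filter is verbatim from accel.sh@62) it is:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # emits one "<opcode>=<module>" line per opcode; in this run every module is "software"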
00:06:16.531 12:00:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.531 12:00:29 -- common/autotest_common.sh@10 -- # set +x 00:06:16.531 12:00:29 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:16.531 12:00:29 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:16.531 12:00:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.531 12:00:29 -- common/autotest_common.sh@10 -- # set +x 00:06:16.531 ************************************ 00:06:16.531 START TEST accel_missing_filename 00:06:16.531 ************************************ 00:06:16.531 12:00:29 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:16.531 12:00:29 -- common/autotest_common.sh@640 -- # local es=0 00:06:16.531 12:00:29 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:16.531 12:00:29 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:16.531 12:00:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:16.531 12:00:29 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:16.531 12:00:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:16.531 12:00:29 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:16.531 12:00:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:16.531 12:00:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.531 12:00:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.531 12:00:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.531 12:00:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.531 12:00:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.531 12:00:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.531 12:00:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.531 12:00:29 -- accel/accel.sh@42 -- # jq -r . 00:06:16.531 [2024-06-11 12:00:29.417997] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:16.531 [2024-06-11 12:00:29.418079] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275172 ] 00:06:16.531 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.531 [2024-06-11 12:00:29.480445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.531 [2024-06-11 12:00:29.508316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.531 [2024-06-11 12:00:29.539951] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:16.791 [2024-06-11 12:00:29.576799] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:16.791 A filename is required. 
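accel_missing_filename above, and accel_compress_verify right after it, are negative tests: accel_perf is launched with a deliberately invalid argument set and the NOT/es bookkeeping that follows turns the expected non-zero exit into a pass. Minus the -c /dev/fd/62 config plumbing, the two failing invocations amount to roughly:

  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  BIB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
  $PERF -t 1 -w compress              # fails: "A filename is required."
  $PERF -t 1 -w compress -l $BIB -y   # fails: "Compression does not support the verify option, aborting."

Both commands exit non-zero, which is exactly what these tests require; accel_wrong_workload further down applies the same pattern with an unknown workload name (-w foobar).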
00:06:16.791 12:00:29 -- common/autotest_common.sh@643 -- # es=234 00:06:16.791 12:00:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:16.791 12:00:29 -- common/autotest_common.sh@652 -- # es=106 00:06:16.791 12:00:29 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:16.791 12:00:29 -- common/autotest_common.sh@660 -- # es=1 00:06:16.791 12:00:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:16.791 00:06:16.791 real 0m0.228s 00:06:16.791 user 0m0.169s 00:06:16.791 sys 0m0.100s 00:06:16.791 12:00:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.791 12:00:29 -- common/autotest_common.sh@10 -- # set +x 00:06:16.791 ************************************ 00:06:16.791 END TEST accel_missing_filename 00:06:16.791 ************************************ 00:06:16.791 12:00:29 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:16.791 12:00:29 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:16.791 12:00:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.791 12:00:29 -- common/autotest_common.sh@10 -- # set +x 00:06:16.791 ************************************ 00:06:16.791 START TEST accel_compress_verify 00:06:16.791 ************************************ 00:06:16.791 12:00:29 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:16.791 12:00:29 -- common/autotest_common.sh@640 -- # local es=0 00:06:16.791 12:00:29 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:16.791 12:00:29 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:16.791 12:00:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:16.791 12:00:29 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:16.791 12:00:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:16.791 12:00:29 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:16.791 12:00:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:16.791 12:00:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.791 12:00:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.791 12:00:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.791 12:00:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.791 12:00:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.791 12:00:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.791 12:00:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.791 12:00:29 -- accel/accel.sh@42 -- # jq -r . 00:06:16.792 [2024-06-11 12:00:29.688961] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:16.792 [2024-06-11 12:00:29.689057] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275197 ] 00:06:16.792 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.792 [2024-06-11 12:00:29.750000] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.792 [2024-06-11 12:00:29.776417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.792 [2024-06-11 12:00:29.808090] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.052 [2024-06-11 12:00:29.844924] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:17.053 00:06:17.053 Compression does not support the verify option, aborting. 00:06:17.053 12:00:29 -- common/autotest_common.sh@643 -- # es=161 00:06:17.053 12:00:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:17.053 12:00:29 -- common/autotest_common.sh@652 -- # es=33 00:06:17.053 12:00:29 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:17.053 12:00:29 -- common/autotest_common.sh@660 -- # es=1 00:06:17.053 12:00:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:17.053 00:06:17.053 real 0m0.226s 00:06:17.053 user 0m0.169s 00:06:17.053 sys 0m0.098s 00:06:17.053 12:00:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.053 12:00:29 -- common/autotest_common.sh@10 -- # set +x 00:06:17.053 ************************************ 00:06:17.053 END TEST accel_compress_verify 00:06:17.053 ************************************ 00:06:17.053 12:00:29 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:17.053 12:00:29 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:17.053 12:00:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.053 12:00:29 -- common/autotest_common.sh@10 -- # set +x 00:06:17.053 ************************************ 00:06:17.053 START TEST accel_wrong_workload 00:06:17.053 ************************************ 00:06:17.053 12:00:29 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:17.053 12:00:29 -- common/autotest_common.sh@640 -- # local es=0 00:06:17.053 12:00:29 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:17.053 12:00:29 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:17.053 12:00:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:17.053 12:00:29 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:17.053 12:00:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:17.053 12:00:29 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:17.053 12:00:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:17.053 12:00:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.053 12:00:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.053 12:00:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.053 12:00:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.053 12:00:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.053 12:00:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.053 12:00:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.053 12:00:29 -- accel/accel.sh@42 -- # jq -r . 
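The asterisk banners, the START TEST/END TEST lines, and the real/user/sys timings that frame each of these cases come from the run_test helper that accel.sh calls (run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar, and so on). Its rough shape, inferred from the banners in this log rather than copied from autotest_common.sh:

  run_test() {                        # inferred sketch only
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                       # produces the real/user/sys lines seen after each case
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }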
00:06:17.053 Unsupported workload type: foobar 00:06:17.053 [2024-06-11 12:00:29.953090] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:17.053 accel_perf options: 00:06:17.053 [-h help message] 00:06:17.053 [-q queue depth per core] 00:06:17.053 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:17.053 [-T number of threads per core 00:06:17.053 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:17.053 [-t time in seconds] 00:06:17.053 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:17.053 [ dif_verify, , dif_generate, dif_generate_copy 00:06:17.053 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:17.053 [-l for compress/decompress workloads, name of uncompressed input file 00:06:17.053 [-S for crc32c workload, use this seed value (default 0) 00:06:17.053 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:17.053 [-f for fill workload, use this BYTE value (default 255) 00:06:17.053 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:17.053 [-y verify result if this switch is on] 00:06:17.053 [-a tasks to allocate per core (default: same value as -q)] 00:06:17.053 Can be used to spread operations across a wider range of memory. 00:06:17.053 12:00:29 -- common/autotest_common.sh@643 -- # es=1 00:06:17.053 12:00:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:17.053 12:00:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:17.053 12:00:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:17.053 00:06:17.053 real 0m0.034s 00:06:17.053 user 0m0.019s 00:06:17.053 sys 0m0.014s 00:06:17.053 12:00:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.053 12:00:29 -- common/autotest_common.sh@10 -- # set +x 00:06:17.053 ************************************ 00:06:17.053 END TEST accel_wrong_workload 00:06:17.053 ************************************ 00:06:17.053 Error: writing output failed: Broken pipe 00:06:17.053 12:00:29 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:17.053 12:00:29 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:17.053 12:00:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.053 12:00:29 -- common/autotest_common.sh@10 -- # set +x 00:06:17.053 ************************************ 00:06:17.053 START TEST accel_negative_buffers 00:06:17.053 ************************************ 00:06:17.053 12:00:29 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:17.053 12:00:30 -- common/autotest_common.sh@640 -- # local es=0 00:06:17.053 12:00:30 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:17.053 12:00:30 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:17.053 12:00:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:17.053 12:00:30 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:17.053 12:00:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:17.053 12:00:30 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:17.053 12:00:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:06:17.053 12:00:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.053 12:00:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.053 12:00:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.053 12:00:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.053 12:00:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.053 12:00:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.053 12:00:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.053 12:00:30 -- accel/accel.sh@42 -- # jq -r . 00:06:17.053 -x option must be non-negative. 00:06:17.053 [2024-06-11 12:00:30.028331] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:17.053 accel_perf options: 00:06:17.053 [-h help message] 00:06:17.053 [-q queue depth per core] 00:06:17.053 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:17.053 [-T number of threads per core 00:06:17.053 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:17.053 [-t time in seconds] 00:06:17.053 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:17.053 [ dif_verify, , dif_generate, dif_generate_copy 00:06:17.053 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:17.053 [-l for compress/decompress workloads, name of uncompressed input file 00:06:17.053 [-S for crc32c workload, use this seed value (default 0) 00:06:17.053 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:17.053 [-f for fill workload, use this BYTE value (default 255) 00:06:17.053 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:17.053 [-y verify result if this switch is on] 00:06:17.053 [-a tasks to allocate per core (default: same value as -q)] 00:06:17.053 Can be used to spread operations across a wider range of memory. 
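accel_wrong_workload and accel_negative_buffers both steer spdk_app_parse_args into its error path on purpose: an unknown `-w` value and a negative `-x` count are rejected, accel_perf prints the usage text above and exits non-zero, and NOT turns that into a pass. The two invocations being exercised are essentially (paths shortened; the full /var/jenkins/... paths appear in the trace):

  NOT accel_perf -c <(build_accel_config) -t 1 -w foobar        # unsupported workload type
  NOT accel_perf -c <(build_accel_config) -t 1 -w xor -y -x -1  # -x must be non-negative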
00:06:17.053 12:00:30 -- common/autotest_common.sh@643 -- # es=1 00:06:17.053 12:00:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:17.053 12:00:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:17.053 12:00:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:17.053 00:06:17.053 real 0m0.034s 00:06:17.053 user 0m0.021s 00:06:17.053 sys 0m0.012s 00:06:17.053 12:00:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.053 12:00:30 -- common/autotest_common.sh@10 -- # set +x 00:06:17.053 ************************************ 00:06:17.053 END TEST accel_negative_buffers 00:06:17.053 ************************************ 00:06:17.053 Error: writing output failed: Broken pipe 00:06:17.053 12:00:30 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:17.053 12:00:30 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:17.053 12:00:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.053 12:00:30 -- common/autotest_common.sh@10 -- # set +x 00:06:17.053 ************************************ 00:06:17.053 START TEST accel_crc32c 00:06:17.053 ************************************ 00:06:17.053 12:00:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:17.053 12:00:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.053 12:00:30 -- accel/accel.sh@17 -- # local accel_module 00:06:17.053 12:00:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:17.053 12:00:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:17.053 12:00:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.053 12:00:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.053 12:00:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.053 12:00:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.053 12:00:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.053 12:00:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.053 12:00:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.053 12:00:30 -- accel/accel.sh@42 -- # jq -r . 00:06:17.314 [2024-06-11 12:00:30.099736] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:17.314 [2024-06-11 12:00:30.099806] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275253 ] 00:06:17.314 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.314 [2024-06-11 12:00:30.162961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.314 [2024-06-11 12:00:30.191983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.697 12:00:31 -- accel/accel.sh@18 -- # out=' 00:06:18.697 SPDK Configuration: 00:06:18.697 Core mask: 0x1 00:06:18.697 00:06:18.697 Accel Perf Configuration: 00:06:18.697 Workload Type: crc32c 00:06:18.697 CRC-32C seed: 32 00:06:18.697 Transfer size: 4096 bytes 00:06:18.697 Vector count 1 00:06:18.697 Module: software 00:06:18.697 Queue depth: 32 00:06:18.697 Allocate depth: 32 00:06:18.697 # threads/core: 1 00:06:18.697 Run time: 1 seconds 00:06:18.697 Verify: Yes 00:06:18.697 00:06:18.697 Running for 1 seconds... 
00:06:18.697 00:06:18.697 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:18.697 ------------------------------------------------------------------------------------ 00:06:18.697 0,0 449184/s 1754 MiB/s 0 0 00:06:18.697 ==================================================================================== 00:06:18.697 Total 449184/s 1754 MiB/s 0 0' 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # IFS=: 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # read -r var val 00:06:18.697 12:00:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:18.697 12:00:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:18.697 12:00:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.697 12:00:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.697 12:00:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.697 12:00:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.697 12:00:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.697 12:00:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.697 12:00:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.697 12:00:31 -- accel/accel.sh@42 -- # jq -r . 00:06:18.697 [2024-06-11 12:00:31.331143] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:18.697 [2024-06-11 12:00:31.331243] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275587 ] 00:06:18.697 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.697 [2024-06-11 12:00:31.402699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.697 [2024-06-11 12:00:31.430691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.697 12:00:31 -- accel/accel.sh@21 -- # val= 00:06:18.697 12:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # IFS=: 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # read -r var val 00:06:18.697 12:00:31 -- accel/accel.sh@21 -- # val= 00:06:18.697 12:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # IFS=: 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # read -r var val 00:06:18.697 12:00:31 -- accel/accel.sh@21 -- # val=0x1 00:06:18.697 12:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # IFS=: 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # read -r var val 00:06:18.697 12:00:31 -- accel/accel.sh@21 -- # val= 00:06:18.697 12:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # IFS=: 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # read -r var val 00:06:18.697 12:00:31 -- accel/accel.sh@21 -- # val= 00:06:18.697 12:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # IFS=: 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # read -r var val 00:06:18.697 12:00:31 -- accel/accel.sh@21 -- # val=crc32c 00:06:18.697 12:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.697 12:00:31 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # IFS=: 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # read -r var val 00:06:18.697 12:00:31 -- accel/accel.sh@21 -- # val=32 00:06:18.697 12:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # IFS=: 00:06:18.697 
12:00:31 -- accel/accel.sh@20 -- # read -r var val 00:06:18.697 12:00:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:18.697 12:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # IFS=: 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # read -r var val 00:06:18.697 12:00:31 -- accel/accel.sh@21 -- # val= 00:06:18.697 12:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # IFS=: 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # read -r var val 00:06:18.697 12:00:31 -- accel/accel.sh@21 -- # val=software 00:06:18.697 12:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.697 12:00:31 -- accel/accel.sh@23 -- # accel_module=software 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # IFS=: 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # read -r var val 00:06:18.697 12:00:31 -- accel/accel.sh@21 -- # val=32 00:06:18.697 12:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # IFS=: 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # read -r var val 00:06:18.697 12:00:31 -- accel/accel.sh@21 -- # val=32 00:06:18.697 12:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # IFS=: 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # read -r var val 00:06:18.697 12:00:31 -- accel/accel.sh@21 -- # val=1 00:06:18.697 12:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # IFS=: 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # read -r var val 00:06:18.697 12:00:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:18.697 12:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # IFS=: 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # read -r var val 00:06:18.697 12:00:31 -- accel/accel.sh@21 -- # val=Yes 00:06:18.697 12:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # IFS=: 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # read -r var val 00:06:18.697 12:00:31 -- accel/accel.sh@21 -- # val= 00:06:18.697 12:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # IFS=: 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # read -r var val 00:06:18.697 12:00:31 -- accel/accel.sh@21 -- # val= 00:06:18.697 12:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # IFS=: 00:06:18.697 12:00:31 -- accel/accel.sh@20 -- # read -r var val 00:06:19.638 12:00:32 -- accel/accel.sh@21 -- # val= 00:06:19.638 12:00:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.638 12:00:32 -- accel/accel.sh@20 -- # IFS=: 00:06:19.638 12:00:32 -- accel/accel.sh@20 -- # read -r var val 00:06:19.638 12:00:32 -- accel/accel.sh@21 -- # val= 00:06:19.638 12:00:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.638 12:00:32 -- accel/accel.sh@20 -- # IFS=: 00:06:19.638 12:00:32 -- accel/accel.sh@20 -- # read -r var val 00:06:19.638 12:00:32 -- accel/accel.sh@21 -- # val= 00:06:19.639 12:00:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.639 12:00:32 -- accel/accel.sh@20 -- # IFS=: 00:06:19.639 12:00:32 -- accel/accel.sh@20 -- # read -r var val 00:06:19.639 12:00:32 -- accel/accel.sh@21 -- # val= 00:06:19.639 12:00:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.639 12:00:32 -- accel/accel.sh@20 -- # IFS=: 00:06:19.639 12:00:32 -- accel/accel.sh@20 -- # read -r var val 00:06:19.639 12:00:32 -- accel/accel.sh@21 -- # val= 00:06:19.639 12:00:32 -- accel/accel.sh@22 -- # case "$var" in 
00:06:19.639 12:00:32 -- accel/accel.sh@20 -- # IFS=: 00:06:19.639 12:00:32 -- accel/accel.sh@20 -- # read -r var val 00:06:19.639 12:00:32 -- accel/accel.sh@21 -- # val= 00:06:19.639 12:00:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.639 12:00:32 -- accel/accel.sh@20 -- # IFS=: 00:06:19.639 12:00:32 -- accel/accel.sh@20 -- # read -r var val 00:06:19.639 12:00:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:19.639 12:00:32 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:19.639 12:00:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.639 00:06:19.639 real 0m2.474s 00:06:19.639 user 0m2.270s 00:06:19.639 sys 0m0.209s 00:06:19.639 12:00:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.639 12:00:32 -- common/autotest_common.sh@10 -- # set +x 00:06:19.639 ************************************ 00:06:19.639 END TEST accel_crc32c 00:06:19.639 ************************************ 00:06:19.639 12:00:32 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:19.639 12:00:32 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:19.639 12:00:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:19.639 12:00:32 -- common/autotest_common.sh@10 -- # set +x 00:06:19.639 ************************************ 00:06:19.639 START TEST accel_crc32c_C2 00:06:19.639 ************************************ 00:06:19.639 12:00:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:19.639 12:00:32 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.639 12:00:32 -- accel/accel.sh@17 -- # local accel_module 00:06:19.639 12:00:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:19.639 12:00:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:19.639 12:00:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.639 12:00:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.639 12:00:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.639 12:00:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.639 12:00:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.639 12:00:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.639 12:00:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.639 12:00:32 -- accel/accel.sh@42 -- # jq -r . 00:06:19.639 [2024-06-11 12:00:32.616132] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:19.639 [2024-06-11 12:00:32.616225] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275913 ] 00:06:19.639 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.899 [2024-06-11 12:00:32.679444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.899 [2024-06-11 12:00:32.709534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.839 12:00:33 -- accel/accel.sh@18 -- # out=' 00:06:20.839 SPDK Configuration: 00:06:20.839 Core mask: 0x1 00:06:20.839 00:06:20.839 Accel Perf Configuration: 00:06:20.839 Workload Type: crc32c 00:06:20.839 CRC-32C seed: 0 00:06:20.839 Transfer size: 4096 bytes 00:06:20.839 Vector count 2 00:06:20.839 Module: software 00:06:20.839 Queue depth: 32 00:06:20.839 Allocate depth: 32 00:06:20.839 # threads/core: 1 00:06:20.839 Run time: 1 seconds 00:06:20.839 Verify: Yes 00:06:20.839 00:06:20.839 Running for 1 seconds... 00:06:20.839 00:06:20.839 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:20.839 ------------------------------------------------------------------------------------ 00:06:20.839 0,0 378272/s 2955 MiB/s 0 0 00:06:20.839 ==================================================================================== 00:06:20.839 Total 378272/s 1477 MiB/s 0 0' 00:06:20.839 12:00:33 -- accel/accel.sh@20 -- # IFS=: 00:06:20.839 12:00:33 -- accel/accel.sh@20 -- # read -r var val 00:06:20.839 12:00:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:20.839 12:00:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:20.839 12:00:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.839 12:00:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.839 12:00:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.839 12:00:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.839 12:00:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.839 12:00:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.839 12:00:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.839 12:00:33 -- accel/accel.sh@42 -- # jq -r . 00:06:20.839 [2024-06-11 12:00:33.848097] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:20.839 [2024-06-11 12:00:33.848177] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1276013 ] 00:06:21.101 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.101 [2024-06-11 12:00:33.910104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.101 [2024-06-11 12:00:33.938090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.101 12:00:33 -- accel/accel.sh@21 -- # val= 00:06:21.101 12:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # IFS=: 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # read -r var val 00:06:21.101 12:00:33 -- accel/accel.sh@21 -- # val= 00:06:21.101 12:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # IFS=: 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # read -r var val 00:06:21.101 12:00:33 -- accel/accel.sh@21 -- # val=0x1 00:06:21.101 12:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # IFS=: 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # read -r var val 00:06:21.101 12:00:33 -- accel/accel.sh@21 -- # val= 00:06:21.101 12:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # IFS=: 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # read -r var val 00:06:21.101 12:00:33 -- accel/accel.sh@21 -- # val= 00:06:21.101 12:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # IFS=: 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # read -r var val 00:06:21.101 12:00:33 -- accel/accel.sh@21 -- # val=crc32c 00:06:21.101 12:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.101 12:00:33 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # IFS=: 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # read -r var val 00:06:21.101 12:00:33 -- accel/accel.sh@21 -- # val=0 00:06:21.101 12:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # IFS=: 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # read -r var val 00:06:21.101 12:00:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:21.101 12:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # IFS=: 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # read -r var val 00:06:21.101 12:00:33 -- accel/accel.sh@21 -- # val= 00:06:21.101 12:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # IFS=: 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # read -r var val 00:06:21.101 12:00:33 -- accel/accel.sh@21 -- # val=software 00:06:21.101 12:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.101 12:00:33 -- accel/accel.sh@23 -- # accel_module=software 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # IFS=: 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # read -r var val 00:06:21.101 12:00:33 -- accel/accel.sh@21 -- # val=32 00:06:21.101 12:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # IFS=: 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # read -r var val 00:06:21.101 12:00:33 -- accel/accel.sh@21 -- # val=32 00:06:21.101 12:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # IFS=: 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # read -r var val 00:06:21.101 12:00:33 -- 
accel/accel.sh@21 -- # val=1 00:06:21.101 12:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # IFS=: 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # read -r var val 00:06:21.101 12:00:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:21.101 12:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # IFS=: 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # read -r var val 00:06:21.101 12:00:33 -- accel/accel.sh@21 -- # val=Yes 00:06:21.101 12:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # IFS=: 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # read -r var val 00:06:21.101 12:00:33 -- accel/accel.sh@21 -- # val= 00:06:21.101 12:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # IFS=: 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # read -r var val 00:06:21.101 12:00:33 -- accel/accel.sh@21 -- # val= 00:06:21.101 12:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # IFS=: 00:06:21.101 12:00:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.042 12:00:35 -- accel/accel.sh@21 -- # val= 00:06:22.042 12:00:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.042 12:00:35 -- accel/accel.sh@20 -- # IFS=: 00:06:22.042 12:00:35 -- accel/accel.sh@20 -- # read -r var val 00:06:22.042 12:00:35 -- accel/accel.sh@21 -- # val= 00:06:22.042 12:00:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.042 12:00:35 -- accel/accel.sh@20 -- # IFS=: 00:06:22.042 12:00:35 -- accel/accel.sh@20 -- # read -r var val 00:06:22.042 12:00:35 -- accel/accel.sh@21 -- # val= 00:06:22.042 12:00:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.042 12:00:35 -- accel/accel.sh@20 -- # IFS=: 00:06:22.042 12:00:35 -- accel/accel.sh@20 -- # read -r var val 00:06:22.042 12:00:35 -- accel/accel.sh@21 -- # val= 00:06:22.042 12:00:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.042 12:00:35 -- accel/accel.sh@20 -- # IFS=: 00:06:22.042 12:00:35 -- accel/accel.sh@20 -- # read -r var val 00:06:22.042 12:00:35 -- accel/accel.sh@21 -- # val= 00:06:22.042 12:00:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.042 12:00:35 -- accel/accel.sh@20 -- # IFS=: 00:06:22.042 12:00:35 -- accel/accel.sh@20 -- # read -r var val 00:06:22.042 12:00:35 -- accel/accel.sh@21 -- # val= 00:06:22.042 12:00:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.042 12:00:35 -- accel/accel.sh@20 -- # IFS=: 00:06:22.042 12:00:35 -- accel/accel.sh@20 -- # read -r var val 00:06:22.042 12:00:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:22.042 12:00:35 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:22.042 12:00:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.042 00:06:22.042 real 0m2.465s 00:06:22.042 user 0m2.269s 00:06:22.042 sys 0m0.202s 00:06:22.042 12:00:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.042 12:00:35 -- common/autotest_common.sh@10 -- # set +x 00:06:22.042 ************************************ 00:06:22.042 END TEST accel_crc32c_C2 00:06:22.042 ************************************ 00:06:22.303 12:00:35 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:22.303 12:00:35 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:22.303 12:00:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:22.303 12:00:35 -- common/autotest_common.sh@10 -- # set +x 00:06:22.303 ************************************ 00:06:22.303 START TEST accel_copy 
00:06:22.303 ************************************ 00:06:22.303 12:00:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:22.303 12:00:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:22.303 12:00:35 -- accel/accel.sh@17 -- # local accel_module 00:06:22.303 12:00:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:22.303 12:00:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:22.303 12:00:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.303 12:00:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.303 12:00:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.303 12:00:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.303 12:00:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.303 12:00:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.303 12:00:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.303 12:00:35 -- accel/accel.sh@42 -- # jq -r . 00:06:22.303 [2024-06-11 12:00:35.121972] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:22.303 [2024-06-11 12:00:35.122052] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1276313 ] 00:06:22.303 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.303 [2024-06-11 12:00:35.182920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.303 [2024-06-11 12:00:35.212849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.686 12:00:36 -- accel/accel.sh@18 -- # out=' 00:06:23.686 SPDK Configuration: 00:06:23.686 Core mask: 0x1 00:06:23.686 00:06:23.686 Accel Perf Configuration: 00:06:23.686 Workload Type: copy 00:06:23.686 Transfer size: 4096 bytes 00:06:23.686 Vector count 1 00:06:23.686 Module: software 00:06:23.686 Queue depth: 32 00:06:23.686 Allocate depth: 32 00:06:23.686 # threads/core: 1 00:06:23.686 Run time: 1 seconds 00:06:23.686 Verify: Yes 00:06:23.686 00:06:23.686 Running for 1 seconds... 00:06:23.686 00:06:23.686 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:23.686 ------------------------------------------------------------------------------------ 00:06:23.686 0,0 301984/s 1179 MiB/s 0 0 00:06:23.686 ==================================================================================== 00:06:23.686 Total 301984/s 1179 MiB/s 0 0' 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # IFS=: 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # read -r var val 00:06:23.686 12:00:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:23.686 12:00:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:23.686 12:00:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.686 12:00:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.686 12:00:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.686 12:00:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.686 12:00:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.686 12:00:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.686 12:00:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.686 12:00:36 -- accel/accel.sh@42 -- # jq -r . 00:06:23.686 [2024-06-11 12:00:36.351161] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:23.686 [2024-06-11 12:00:36.351233] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1276647 ] 00:06:23.686 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.686 [2024-06-11 12:00:36.413280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.686 [2024-06-11 12:00:36.441287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.686 12:00:36 -- accel/accel.sh@21 -- # val= 00:06:23.686 12:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # IFS=: 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # read -r var val 00:06:23.686 12:00:36 -- accel/accel.sh@21 -- # val= 00:06:23.686 12:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # IFS=: 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # read -r var val 00:06:23.686 12:00:36 -- accel/accel.sh@21 -- # val=0x1 00:06:23.686 12:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # IFS=: 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # read -r var val 00:06:23.686 12:00:36 -- accel/accel.sh@21 -- # val= 00:06:23.686 12:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # IFS=: 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # read -r var val 00:06:23.686 12:00:36 -- accel/accel.sh@21 -- # val= 00:06:23.686 12:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # IFS=: 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # read -r var val 00:06:23.686 12:00:36 -- accel/accel.sh@21 -- # val=copy 00:06:23.686 12:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.686 12:00:36 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # IFS=: 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # read -r var val 00:06:23.686 12:00:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:23.686 12:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # IFS=: 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # read -r var val 00:06:23.686 12:00:36 -- accel/accel.sh@21 -- # val= 00:06:23.686 12:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # IFS=: 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # read -r var val 00:06:23.686 12:00:36 -- accel/accel.sh@21 -- # val=software 00:06:23.686 12:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.686 12:00:36 -- accel/accel.sh@23 -- # accel_module=software 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # IFS=: 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # read -r var val 00:06:23.686 12:00:36 -- accel/accel.sh@21 -- # val=32 00:06:23.686 12:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # IFS=: 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # read -r var val 00:06:23.686 12:00:36 -- accel/accel.sh@21 -- # val=32 00:06:23.686 12:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # IFS=: 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # read -r var val 00:06:23.686 12:00:36 -- accel/accel.sh@21 -- # val=1 00:06:23.686 12:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # IFS=: 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # read -r var val 00:06:23.686 12:00:36 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:23.686 12:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # IFS=: 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # read -r var val 00:06:23.686 12:00:36 -- accel/accel.sh@21 -- # val=Yes 00:06:23.686 12:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # IFS=: 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # read -r var val 00:06:23.686 12:00:36 -- accel/accel.sh@21 -- # val= 00:06:23.686 12:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # IFS=: 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # read -r var val 00:06:23.686 12:00:36 -- accel/accel.sh@21 -- # val= 00:06:23.686 12:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # IFS=: 00:06:23.686 12:00:36 -- accel/accel.sh@20 -- # read -r var val 00:06:24.628 12:00:37 -- accel/accel.sh@21 -- # val= 00:06:24.628 12:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.628 12:00:37 -- accel/accel.sh@20 -- # IFS=: 00:06:24.628 12:00:37 -- accel/accel.sh@20 -- # read -r var val 00:06:24.628 12:00:37 -- accel/accel.sh@21 -- # val= 00:06:24.628 12:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.628 12:00:37 -- accel/accel.sh@20 -- # IFS=: 00:06:24.628 12:00:37 -- accel/accel.sh@20 -- # read -r var val 00:06:24.628 12:00:37 -- accel/accel.sh@21 -- # val= 00:06:24.628 12:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.628 12:00:37 -- accel/accel.sh@20 -- # IFS=: 00:06:24.628 12:00:37 -- accel/accel.sh@20 -- # read -r var val 00:06:24.628 12:00:37 -- accel/accel.sh@21 -- # val= 00:06:24.628 12:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.628 12:00:37 -- accel/accel.sh@20 -- # IFS=: 00:06:24.628 12:00:37 -- accel/accel.sh@20 -- # read -r var val 00:06:24.628 12:00:37 -- accel/accel.sh@21 -- # val= 00:06:24.628 12:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.628 12:00:37 -- accel/accel.sh@20 -- # IFS=: 00:06:24.628 12:00:37 -- accel/accel.sh@20 -- # read -r var val 00:06:24.628 12:00:37 -- accel/accel.sh@21 -- # val= 00:06:24.628 12:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.628 12:00:37 -- accel/accel.sh@20 -- # IFS=: 00:06:24.628 12:00:37 -- accel/accel.sh@20 -- # read -r var val 00:06:24.628 12:00:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:24.628 12:00:37 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:24.628 12:00:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.628 00:06:24.628 real 0m2.463s 00:06:24.628 user 0m2.269s 00:06:24.628 sys 0m0.199s 00:06:24.628 12:00:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.628 12:00:37 -- common/autotest_common.sh@10 -- # set +x 00:06:24.628 ************************************ 00:06:24.628 END TEST accel_copy 00:06:24.628 ************************************ 00:06:24.628 12:00:37 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.628 12:00:37 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:24.628 12:00:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.628 12:00:37 -- common/autotest_common.sh@10 -- # set +x 00:06:24.628 ************************************ 00:06:24.628 START TEST accel_fill 00:06:24.628 ************************************ 00:06:24.628 12:00:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.628 12:00:37 -- accel/accel.sh@16 -- # local accel_opc 
00:06:24.628 12:00:37 -- accel/accel.sh@17 -- # local accel_module 00:06:24.628 12:00:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.628 12:00:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.628 12:00:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.628 12:00:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.628 12:00:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.628 12:00:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.628 12:00:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.628 12:00:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.628 12:00:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.628 12:00:37 -- accel/accel.sh@42 -- # jq -r . 00:06:24.628 [2024-06-11 12:00:37.625572] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:24.628 [2024-06-11 12:00:37.625643] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1276943 ] 00:06:24.628 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.888 [2024-06-11 12:00:37.687220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.888 [2024-06-11 12:00:37.716332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.858 12:00:38 -- accel/accel.sh@18 -- # out=' 00:06:25.858 SPDK Configuration: 00:06:25.858 Core mask: 0x1 00:06:25.858 00:06:25.858 Accel Perf Configuration: 00:06:25.858 Workload Type: fill 00:06:25.858 Fill pattern: 0x80 00:06:25.858 Transfer size: 4096 bytes 00:06:25.858 Vector count 1 00:06:25.858 Module: software 00:06:25.858 Queue depth: 64 00:06:25.858 Allocate depth: 64 00:06:25.858 # threads/core: 1 00:06:25.858 Run time: 1 seconds 00:06:25.858 Verify: Yes 00:06:25.859 00:06:25.859 Running for 1 seconds... 00:06:25.859 00:06:25.859 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:25.859 ------------------------------------------------------------------------------------ 00:06:25.859 0,0 471232/s 1840 MiB/s 0 0 00:06:25.859 ==================================================================================== 00:06:25.859 Total 471232/s 1840 MiB/s 0 0' 00:06:25.859 12:00:38 -- accel/accel.sh@20 -- # IFS=: 00:06:25.859 12:00:38 -- accel/accel.sh@20 -- # read -r var val 00:06:25.859 12:00:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:25.859 12:00:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:25.859 12:00:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.859 12:00:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.859 12:00:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.859 12:00:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.859 12:00:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.859 12:00:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.859 12:00:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.859 12:00:38 -- accel/accel.sh@42 -- # jq -r . 00:06:25.859 [2024-06-11 12:00:38.854812] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:25.859 [2024-06-11 12:00:38.854904] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1277054 ] 00:06:26.166 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.166 [2024-06-11 12:00:38.916730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.166 [2024-06-11 12:00:38.945068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.166 12:00:38 -- accel/accel.sh@21 -- # val= 00:06:26.166 12:00:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.166 12:00:38 -- accel/accel.sh@20 -- # IFS=: 00:06:26.166 12:00:38 -- accel/accel.sh@20 -- # read -r var val 00:06:26.166 12:00:38 -- accel/accel.sh@21 -- # val= 00:06:26.166 12:00:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.166 12:00:38 -- accel/accel.sh@20 -- # IFS=: 00:06:26.166 12:00:38 -- accel/accel.sh@20 -- # read -r var val 00:06:26.166 12:00:38 -- accel/accel.sh@21 -- # val=0x1 00:06:26.166 12:00:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.166 12:00:38 -- accel/accel.sh@20 -- # IFS=: 00:06:26.166 12:00:38 -- accel/accel.sh@20 -- # read -r var val 00:06:26.166 12:00:38 -- accel/accel.sh@21 -- # val= 00:06:26.166 12:00:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.166 12:00:38 -- accel/accel.sh@20 -- # IFS=: 00:06:26.166 12:00:38 -- accel/accel.sh@20 -- # read -r var val 00:06:26.166 12:00:38 -- accel/accel.sh@21 -- # val= 00:06:26.166 12:00:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.166 12:00:38 -- accel/accel.sh@20 -- # IFS=: 00:06:26.166 12:00:38 -- accel/accel.sh@20 -- # read -r var val 00:06:26.166 12:00:38 -- accel/accel.sh@21 -- # val=fill 00:06:26.166 12:00:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.166 12:00:38 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:26.166 12:00:38 -- accel/accel.sh@20 -- # IFS=: 00:06:26.166 12:00:38 -- accel/accel.sh@20 -- # read -r var val 00:06:26.166 12:00:38 -- accel/accel.sh@21 -- # val=0x80 00:06:26.166 12:00:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.166 12:00:38 -- accel/accel.sh@20 -- # IFS=: 00:06:26.166 12:00:38 -- accel/accel.sh@20 -- # read -r var val 00:06:26.166 12:00:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:26.166 12:00:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.166 12:00:38 -- accel/accel.sh@20 -- # IFS=: 00:06:26.166 12:00:38 -- accel/accel.sh@20 -- # read -r var val 00:06:26.166 12:00:38 -- accel/accel.sh@21 -- # val= 00:06:26.166 12:00:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.166 12:00:38 -- accel/accel.sh@20 -- # IFS=: 00:06:26.166 12:00:38 -- accel/accel.sh@20 -- # read -r var val 00:06:26.166 12:00:38 -- accel/accel.sh@21 -- # val=software 00:06:26.166 12:00:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.166 12:00:38 -- accel/accel.sh@23 -- # accel_module=software 00:06:26.166 12:00:38 -- accel/accel.sh@20 -- # IFS=: 00:06:26.166 12:00:38 -- accel/accel.sh@20 -- # read -r var val 00:06:26.166 12:00:38 -- accel/accel.sh@21 -- # val=64 00:06:26.166 12:00:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.166 12:00:38 -- accel/accel.sh@20 -- # IFS=: 00:06:26.166 12:00:38 -- accel/accel.sh@20 -- # read -r var val 00:06:26.166 12:00:38 -- accel/accel.sh@21 -- # val=64 00:06:26.166 12:00:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.166 12:00:38 -- accel/accel.sh@20 -- # IFS=: 00:06:26.166 12:00:38 -- accel/accel.sh@20 -- # read -r var val 00:06:26.166 12:00:38 -- 
accel/accel.sh@21 -- # val=1 00:06:26.166 12:00:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.166 12:00:38 -- accel/accel.sh@20 -- # IFS=: 00:06:26.166 12:00:38 -- accel/accel.sh@20 -- # read -r var val 00:06:26.166 12:00:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:26.166 12:00:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.167 12:00:38 -- accel/accel.sh@20 -- # IFS=: 00:06:26.167 12:00:38 -- accel/accel.sh@20 -- # read -r var val 00:06:26.167 12:00:38 -- accel/accel.sh@21 -- # val=Yes 00:06:26.167 12:00:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.167 12:00:38 -- accel/accel.sh@20 -- # IFS=: 00:06:26.167 12:00:38 -- accel/accel.sh@20 -- # read -r var val 00:06:26.167 12:00:38 -- accel/accel.sh@21 -- # val= 00:06:26.167 12:00:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.167 12:00:38 -- accel/accel.sh@20 -- # IFS=: 00:06:26.167 12:00:38 -- accel/accel.sh@20 -- # read -r var val 00:06:26.167 12:00:38 -- accel/accel.sh@21 -- # val= 00:06:26.167 12:00:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.167 12:00:38 -- accel/accel.sh@20 -- # IFS=: 00:06:26.167 12:00:38 -- accel/accel.sh@20 -- # read -r var val 00:06:27.111 12:00:40 -- accel/accel.sh@21 -- # val= 00:06:27.111 12:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.111 12:00:40 -- accel/accel.sh@20 -- # IFS=: 00:06:27.111 12:00:40 -- accel/accel.sh@20 -- # read -r var val 00:06:27.111 12:00:40 -- accel/accel.sh@21 -- # val= 00:06:27.111 12:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.111 12:00:40 -- accel/accel.sh@20 -- # IFS=: 00:06:27.111 12:00:40 -- accel/accel.sh@20 -- # read -r var val 00:06:27.111 12:00:40 -- accel/accel.sh@21 -- # val= 00:06:27.111 12:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.111 12:00:40 -- accel/accel.sh@20 -- # IFS=: 00:06:27.111 12:00:40 -- accel/accel.sh@20 -- # read -r var val 00:06:27.111 12:00:40 -- accel/accel.sh@21 -- # val= 00:06:27.111 12:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.111 12:00:40 -- accel/accel.sh@20 -- # IFS=: 00:06:27.111 12:00:40 -- accel/accel.sh@20 -- # read -r var val 00:06:27.111 12:00:40 -- accel/accel.sh@21 -- # val= 00:06:27.111 12:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.111 12:00:40 -- accel/accel.sh@20 -- # IFS=: 00:06:27.111 12:00:40 -- accel/accel.sh@20 -- # read -r var val 00:06:27.111 12:00:40 -- accel/accel.sh@21 -- # val= 00:06:27.111 12:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.111 12:00:40 -- accel/accel.sh@20 -- # IFS=: 00:06:27.111 12:00:40 -- accel/accel.sh@20 -- # read -r var val 00:06:27.111 12:00:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:27.111 12:00:40 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:27.111 12:00:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.111 00:06:27.111 real 0m2.468s 00:06:27.111 user 0m2.279s 00:06:27.111 sys 0m0.196s 00:06:27.111 12:00:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.111 12:00:40 -- common/autotest_common.sh@10 -- # set +x 00:06:27.111 ************************************ 00:06:27.111 END TEST accel_fill 00:06:27.111 ************************************ 00:06:27.111 12:00:40 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:27.111 12:00:40 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:27.111 12:00:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.111 12:00:40 -- common/autotest_common.sh@10 -- # set +x 00:06:27.111 ************************************ 00:06:27.111 START TEST 
accel_copy_crc32c 00:06:27.111 ************************************ 00:06:27.111 12:00:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:27.111 12:00:40 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.111 12:00:40 -- accel/accel.sh@17 -- # local accel_module 00:06:27.111 12:00:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:27.111 12:00:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:27.111 12:00:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.111 12:00:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.111 12:00:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.111 12:00:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.111 12:00:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.111 12:00:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.111 12:00:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.111 12:00:40 -- accel/accel.sh@42 -- # jq -r . 00:06:27.111 [2024-06-11 12:00:40.133393] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:27.111 [2024-06-11 12:00:40.133475] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1277377 ] 00:06:27.373 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.373 [2024-06-11 12:00:40.195197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.373 [2024-06-11 12:00:40.223601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.317 12:00:41 -- accel/accel.sh@18 -- # out=' 00:06:28.317 SPDK Configuration: 00:06:28.317 Core mask: 0x1 00:06:28.317 00:06:28.317 Accel Perf Configuration: 00:06:28.317 Workload Type: copy_crc32c 00:06:28.317 CRC-32C seed: 0 00:06:28.317 Vector size: 4096 bytes 00:06:28.317 Transfer size: 4096 bytes 00:06:28.317 Vector count 1 00:06:28.317 Module: software 00:06:28.317 Queue depth: 32 00:06:28.317 Allocate depth: 32 00:06:28.317 # threads/core: 1 00:06:28.317 Run time: 1 seconds 00:06:28.317 Verify: Yes 00:06:28.317 00:06:28.317 Running for 1 seconds... 00:06:28.317 00:06:28.317 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:28.317 ------------------------------------------------------------------------------------ 00:06:28.317 0,0 248480/s 970 MiB/s 0 0 00:06:28.317 ==================================================================================== 00:06:28.317 Total 248480/s 970 MiB/s 0 0' 00:06:28.317 12:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:28.317 12:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:28.317 12:00:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:28.317 12:00:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:28.317 12:00:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.317 12:00:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.317 12:00:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.317 12:00:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.317 12:00:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.317 12:00:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.317 12:00:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.317 12:00:41 -- accel/accel.sh@42 -- # jq -r . 
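The long runs of `IFS=:`, `read -r var val`, `val=crc32c`, `val=software` and similar lines around each perf run are the accel_test helper parsing the "SPDK Configuration:" block that accel_perf printed, so it can assert afterwards which opcode and module actually ran (the `[[ -n software ]]`, `[[ -n copy_crc32c ]]` and `[[ software == software ]]` checks at accel.sh@28). A sketch of that read-back, inferred from the trace:

  # $out holds the captured accel_perf output shown above
  while IFS=: read -r var val; do
      case "$var" in
          "Workload Type") accel_opc=${val# } ;;     # e.g. copy_crc32c
          "Module") accel_module=${val# } ;;         # expected to be "software" in these runs
      esac
  done <<< "$out"
  [[ -n $accel_module && -n $accel_opc ]]
  [[ $accel_module == software ]]                    # the final assertion for every case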
00:06:28.579 [2024-06-11 12:00:41.362663] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:28.579 [2024-06-11 12:00:41.362747] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1277712 ] 00:06:28.579 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.579 [2024-06-11 12:00:41.424733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.579 [2024-06-11 12:00:41.452823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.579 12:00:41 -- accel/accel.sh@21 -- # val= 00:06:28.579 12:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:28.579 12:00:41 -- accel/accel.sh@21 -- # val= 00:06:28.579 12:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:28.579 12:00:41 -- accel/accel.sh@21 -- # val=0x1 00:06:28.579 12:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:28.579 12:00:41 -- accel/accel.sh@21 -- # val= 00:06:28.579 12:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:28.579 12:00:41 -- accel/accel.sh@21 -- # val= 00:06:28.579 12:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:28.579 12:00:41 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:28.579 12:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.579 12:00:41 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:28.579 12:00:41 -- accel/accel.sh@21 -- # val=0 00:06:28.579 12:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:28.579 12:00:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:28.579 12:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:28.579 12:00:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:28.579 12:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:28.579 12:00:41 -- accel/accel.sh@21 -- # val= 00:06:28.579 12:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:28.579 12:00:41 -- accel/accel.sh@21 -- # val=software 00:06:28.579 12:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.579 12:00:41 -- accel/accel.sh@23 -- # accel_module=software 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:28.579 12:00:41 -- accel/accel.sh@21 -- # val=32 00:06:28.579 12:00:41 -- accel/accel.sh@22 -- # case "$var" in 
00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:28.579 12:00:41 -- accel/accel.sh@21 -- # val=32 00:06:28.579 12:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:28.579 12:00:41 -- accel/accel.sh@21 -- # val=1 00:06:28.579 12:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:28.579 12:00:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:28.579 12:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:28.579 12:00:41 -- accel/accel.sh@21 -- # val=Yes 00:06:28.579 12:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:28.579 12:00:41 -- accel/accel.sh@21 -- # val= 00:06:28.579 12:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:28.579 12:00:41 -- accel/accel.sh@21 -- # val= 00:06:28.579 12:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # IFS=: 00:06:28.579 12:00:41 -- accel/accel.sh@20 -- # read -r var val 00:06:29.962 12:00:42 -- accel/accel.sh@21 -- # val= 00:06:29.962 12:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.962 12:00:42 -- accel/accel.sh@20 -- # IFS=: 00:06:29.962 12:00:42 -- accel/accel.sh@20 -- # read -r var val 00:06:29.962 12:00:42 -- accel/accel.sh@21 -- # val= 00:06:29.962 12:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.962 12:00:42 -- accel/accel.sh@20 -- # IFS=: 00:06:29.962 12:00:42 -- accel/accel.sh@20 -- # read -r var val 00:06:29.962 12:00:42 -- accel/accel.sh@21 -- # val= 00:06:29.962 12:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.962 12:00:42 -- accel/accel.sh@20 -- # IFS=: 00:06:29.962 12:00:42 -- accel/accel.sh@20 -- # read -r var val 00:06:29.962 12:00:42 -- accel/accel.sh@21 -- # val= 00:06:29.962 12:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.962 12:00:42 -- accel/accel.sh@20 -- # IFS=: 00:06:29.962 12:00:42 -- accel/accel.sh@20 -- # read -r var val 00:06:29.962 12:00:42 -- accel/accel.sh@21 -- # val= 00:06:29.962 12:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.962 12:00:42 -- accel/accel.sh@20 -- # IFS=: 00:06:29.962 12:00:42 -- accel/accel.sh@20 -- # read -r var val 00:06:29.962 12:00:42 -- accel/accel.sh@21 -- # val= 00:06:29.962 12:00:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.962 12:00:42 -- accel/accel.sh@20 -- # IFS=: 00:06:29.962 12:00:42 -- accel/accel.sh@20 -- # read -r var val 00:06:29.962 12:00:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:29.962 12:00:42 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:29.962 12:00:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.962 00:06:29.962 real 0m2.463s 00:06:29.962 user 0m2.263s 00:06:29.962 sys 0m0.207s 00:06:29.962 12:00:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.962 12:00:42 -- common/autotest_common.sh@10 -- # set +x 00:06:29.962 ************************************ 00:06:29.962 END TEST accel_copy_crc32c 00:06:29.962 ************************************ 00:06:29.962 
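The accel_copy_crc32c case just completed is driven by accel_perf, the SPDK example binary whose full command line appears in the trace above. As a rough sketch (not part of the original run), the same measurement can be reproduced by hand against a built SPDK tree; only flags visible in this log are used, and ACCEL_PERF below is an assumed variable standing in for the binary path shown in the trace.

  # Hypothetical standalone re-run of the copy_crc32c case (sketch only).
  # Path copied from the trace; point it at your own build when reproducing.
  ACCEL_PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  # -t 1 : run for 1 second  ("Run time: 1 seconds" in the summary)
  # -w   : workload type      ("Workload Type: copy_crc32c")
  # -y   : verify the output  ("Verify: Yes")
  "$ACCEL_PERF" -t 1 -w copy_crc32c -y
  # The harness additionally passes "-c /dev/fd/62" with a JSON accel config;
  # with the empty config used here the summary reports "Module: software".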
12:00:42 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:29.962 12:00:42 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:29.962 12:00:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:29.962 12:00:42 -- common/autotest_common.sh@10 -- # set +x 00:06:29.962 ************************************ 00:06:29.962 START TEST accel_copy_crc32c_C2 00:06:29.962 ************************************ 00:06:29.962 12:00:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:29.962 12:00:42 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.962 12:00:42 -- accel/accel.sh@17 -- # local accel_module 00:06:29.962 12:00:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:29.962 12:00:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:29.962 12:00:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.962 12:00:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.962 12:00:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.962 12:00:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.962 12:00:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.962 12:00:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.962 12:00:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.962 12:00:42 -- accel/accel.sh@42 -- # jq -r . 00:06:29.962 [2024-06-11 12:00:42.641811] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:29.962 [2024-06-11 12:00:42.641906] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1277975 ] 00:06:29.962 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.962 [2024-06-11 12:00:42.705739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.962 [2024-06-11 12:00:42.735813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.904 12:00:43 -- accel/accel.sh@18 -- # out=' 00:06:30.904 SPDK Configuration: 00:06:30.904 Core mask: 0x1 00:06:30.904 00:06:30.904 Accel Perf Configuration: 00:06:30.904 Workload Type: copy_crc32c 00:06:30.904 CRC-32C seed: 0 00:06:30.904 Vector size: 4096 bytes 00:06:30.904 Transfer size: 8192 bytes 00:06:30.904 Vector count 2 00:06:30.904 Module: software 00:06:30.904 Queue depth: 32 00:06:30.904 Allocate depth: 32 00:06:30.904 # threads/core: 1 00:06:30.904 Run time: 1 seconds 00:06:30.904 Verify: Yes 00:06:30.904 00:06:30.904 Running for 1 seconds... 
00:06:30.904 00:06:30.904 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:30.904 ------------------------------------------------------------------------------------ 00:06:30.904 0,0 187104/s 1461 MiB/s 0 0 00:06:30.904 ==================================================================================== 00:06:30.904 Total 187104/s 730 MiB/s 0 0' 00:06:30.904 12:00:43 -- accel/accel.sh@20 -- # IFS=: 00:06:30.904 12:00:43 -- accel/accel.sh@20 -- # read -r var val 00:06:30.904 12:00:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:30.904 12:00:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:30.904 12:00:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.904 12:00:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.904 12:00:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.904 12:00:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.904 12:00:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.904 12:00:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.904 12:00:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.904 12:00:43 -- accel/accel.sh@42 -- # jq -r . 00:06:30.904 [2024-06-11 12:00:43.873441] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:30.904 [2024-06-11 12:00:43.873522] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1278100 ] 00:06:30.904 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.904 [2024-06-11 12:00:43.936156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.166 [2024-06-11 12:00:43.964414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.166 12:00:43 -- accel/accel.sh@21 -- # val= 00:06:31.166 12:00:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.166 12:00:43 -- accel/accel.sh@20 -- # IFS=: 00:06:31.166 12:00:43 -- accel/accel.sh@20 -- # read -r var val 00:06:31.166 12:00:43 -- accel/accel.sh@21 -- # val= 00:06:31.166 12:00:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.166 12:00:43 -- accel/accel.sh@20 -- # IFS=: 00:06:31.166 12:00:43 -- accel/accel.sh@20 -- # read -r var val 00:06:31.166 12:00:43 -- accel/accel.sh@21 -- # val=0x1 00:06:31.166 12:00:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.166 12:00:43 -- accel/accel.sh@20 -- # IFS=: 00:06:31.166 12:00:43 -- accel/accel.sh@20 -- # read -r var val 00:06:31.166 12:00:43 -- accel/accel.sh@21 -- # val= 00:06:31.166 12:00:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.166 12:00:43 -- accel/accel.sh@20 -- # IFS=: 00:06:31.166 12:00:43 -- accel/accel.sh@20 -- # read -r var val 00:06:31.166 12:00:43 -- accel/accel.sh@21 -- # val= 00:06:31.166 12:00:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.166 12:00:43 -- accel/accel.sh@20 -- # IFS=: 00:06:31.166 12:00:43 -- accel/accel.sh@20 -- # read -r var val 00:06:31.166 12:00:43 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:31.166 12:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.166 12:00:44 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:31.166 12:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:31.166 12:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:31.166 12:00:44 -- accel/accel.sh@21 -- # val=0 00:06:31.166 12:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.166 12:00:44 -- accel/accel.sh@20 -- # IFS=: 
00:06:31.166 12:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:31.166 12:00:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:31.166 12:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.166 12:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:31.166 12:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:31.166 12:00:44 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:31.166 12:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.166 12:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:31.166 12:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:31.166 12:00:44 -- accel/accel.sh@21 -- # val= 00:06:31.166 12:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.166 12:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:31.166 12:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:31.166 12:00:44 -- accel/accel.sh@21 -- # val=software 00:06:31.166 12:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.166 12:00:44 -- accel/accel.sh@23 -- # accel_module=software 00:06:31.166 12:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:31.166 12:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:31.166 12:00:44 -- accel/accel.sh@21 -- # val=32 00:06:31.166 12:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.166 12:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:31.166 12:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:31.166 12:00:44 -- accel/accel.sh@21 -- # val=32 00:06:31.166 12:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.166 12:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:31.166 12:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:31.166 12:00:44 -- accel/accel.sh@21 -- # val=1 00:06:31.166 12:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.166 12:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:31.166 12:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:31.166 12:00:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:31.166 12:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.166 12:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:31.166 12:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:31.166 12:00:44 -- accel/accel.sh@21 -- # val=Yes 00:06:31.166 12:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.166 12:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:31.166 12:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:31.166 12:00:44 -- accel/accel.sh@21 -- # val= 00:06:31.166 12:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.166 12:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:31.166 12:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:31.166 12:00:44 -- accel/accel.sh@21 -- # val= 00:06:31.166 12:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.166 12:00:44 -- accel/accel.sh@20 -- # IFS=: 00:06:31.166 12:00:44 -- accel/accel.sh@20 -- # read -r var val 00:06:32.108 12:00:45 -- accel/accel.sh@21 -- # val= 00:06:32.108 12:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.108 12:00:45 -- accel/accel.sh@20 -- # IFS=: 00:06:32.108 12:00:45 -- accel/accel.sh@20 -- # read -r var val 00:06:32.108 12:00:45 -- accel/accel.sh@21 -- # val= 00:06:32.108 12:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.108 12:00:45 -- accel/accel.sh@20 -- # IFS=: 00:06:32.108 12:00:45 -- accel/accel.sh@20 -- # read -r var val 00:06:32.108 12:00:45 -- accel/accel.sh@21 -- # val= 00:06:32.108 12:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.108 12:00:45 -- accel/accel.sh@20 -- # IFS=: 00:06:32.108 12:00:45 -- accel/accel.sh@20 -- # read -r var val 00:06:32.108 12:00:45 -- accel/accel.sh@21 -- # val= 00:06:32.108 12:00:45 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:32.108 12:00:45 -- accel/accel.sh@20 -- # IFS=: 00:06:32.108 12:00:45 -- accel/accel.sh@20 -- # read -r var val 00:06:32.108 12:00:45 -- accel/accel.sh@21 -- # val= 00:06:32.108 12:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.108 12:00:45 -- accel/accel.sh@20 -- # IFS=: 00:06:32.108 12:00:45 -- accel/accel.sh@20 -- # read -r var val 00:06:32.108 12:00:45 -- accel/accel.sh@21 -- # val= 00:06:32.108 12:00:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.108 12:00:45 -- accel/accel.sh@20 -- # IFS=: 00:06:32.108 12:00:45 -- accel/accel.sh@20 -- # read -r var val 00:06:32.108 12:00:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:32.108 12:00:45 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:32.108 12:00:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.108 00:06:32.108 real 0m2.468s 00:06:32.108 user 0m2.275s 00:06:32.108 sys 0m0.201s 00:06:32.108 12:00:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.108 12:00:45 -- common/autotest_common.sh@10 -- # set +x 00:06:32.108 ************************************ 00:06:32.108 END TEST accel_copy_crc32c_C2 00:06:32.108 ************************************ 00:06:32.108 12:00:45 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:32.108 12:00:45 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:32.108 12:00:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.109 12:00:45 -- common/autotest_common.sh@10 -- # set +x 00:06:32.109 ************************************ 00:06:32.109 START TEST accel_dualcast 00:06:32.109 ************************************ 00:06:32.109 12:00:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:06:32.109 12:00:45 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.109 12:00:45 -- accel/accel.sh@17 -- # local accel_module 00:06:32.109 12:00:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:32.109 12:00:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.109 12:00:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.109 12:00:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:32.109 12:00:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.109 12:00:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.109 12:00:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.109 12:00:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.109 12:00:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.109 12:00:45 -- accel/accel.sh@42 -- # jq -r . 00:06:32.369 [2024-06-11 12:00:45.148357] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
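The accel_copy_crc32c_C2 case that finished above differs from the previous one only by the "-C 2" argument on the accel_perf command line; its summary accordingly reports "Vector count 2" and "Transfer size: 8192 bytes" (two 4096-byte vectors per operation). The per-core and Total bandwidth figures in that table appear to be computed against different sizes; a quick arithmetic check (sketch, not from the original log):

  # 187104 ops/s over the full 8192-byte transfer ~ the per-core figure
  echo $(( 187104 * 8192 / 1024 / 1024 ))   # -> 1461, matching "1461 MiB/s"
  # The same rate over a single 4096-byte vector ~ the Total line
  echo $(( 187104 * 4096 / 1024 / 1024 ))   # -> 730, matching "730 MiB/s"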
00:06:32.369 [2024-06-11 12:00:45.148434] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1278435 ] 00:06:32.369 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.369 [2024-06-11 12:00:45.226623] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.369 [2024-06-11 12:00:45.254948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.754 12:00:46 -- accel/accel.sh@18 -- # out=' 00:06:33.754 SPDK Configuration: 00:06:33.754 Core mask: 0x1 00:06:33.754 00:06:33.754 Accel Perf Configuration: 00:06:33.754 Workload Type: dualcast 00:06:33.754 Transfer size: 4096 bytes 00:06:33.754 Vector count 1 00:06:33.754 Module: software 00:06:33.754 Queue depth: 32 00:06:33.754 Allocate depth: 32 00:06:33.754 # threads/core: 1 00:06:33.754 Run time: 1 seconds 00:06:33.754 Verify: Yes 00:06:33.754 00:06:33.754 Running for 1 seconds... 00:06:33.754 00:06:33.754 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:33.754 ------------------------------------------------------------------------------------ 00:06:33.754 0,0 363584/s 1420 MiB/s 0 0 00:06:33.754 ==================================================================================== 00:06:33.754 Total 363584/s 1420 MiB/s 0 0' 00:06:33.754 12:00:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.754 12:00:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.754 12:00:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:33.754 12:00:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:33.754 12:00:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.754 12:00:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.754 12:00:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.755 12:00:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.755 12:00:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.755 12:00:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.755 12:00:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.755 12:00:46 -- accel/accel.sh@42 -- # jq -r . 00:06:33.755 [2024-06-11 12:00:46.394074] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
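The dualcast run above follows the same invocation pattern with "-w dualcast" (in the accel framework this op copies one source buffer to two destinations, though the log itself only records the workload name). Its throughput summary is internally consistent with 4096-byte transfers; as a sketch:

  # 363584 transfers/s at 4096 bytes each -> the reported bandwidth
  echo $(( 363584 * 4096 / 1024 / 1024 ))   # -> 1420, matching "1420 MiB/s"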
00:06:33.755 [2024-06-11 12:00:46.394161] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1278768 ] 00:06:33.755 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.755 [2024-06-11 12:00:46.456931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.755 [2024-06-11 12:00:46.484660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.755 12:00:46 -- accel/accel.sh@21 -- # val= 00:06:33.755 12:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.755 12:00:46 -- accel/accel.sh@21 -- # val= 00:06:33.755 12:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.755 12:00:46 -- accel/accel.sh@21 -- # val=0x1 00:06:33.755 12:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.755 12:00:46 -- accel/accel.sh@21 -- # val= 00:06:33.755 12:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.755 12:00:46 -- accel/accel.sh@21 -- # val= 00:06:33.755 12:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.755 12:00:46 -- accel/accel.sh@21 -- # val=dualcast 00:06:33.755 12:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.755 12:00:46 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.755 12:00:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:33.755 12:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.755 12:00:46 -- accel/accel.sh@21 -- # val= 00:06:33.755 12:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.755 12:00:46 -- accel/accel.sh@21 -- # val=software 00:06:33.755 12:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.755 12:00:46 -- accel/accel.sh@23 -- # accel_module=software 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.755 12:00:46 -- accel/accel.sh@21 -- # val=32 00:06:33.755 12:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.755 12:00:46 -- accel/accel.sh@21 -- # val=32 00:06:33.755 12:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.755 12:00:46 -- accel/accel.sh@21 -- # val=1 00:06:33.755 12:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.755 12:00:46 
-- accel/accel.sh@21 -- # val='1 seconds' 00:06:33.755 12:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.755 12:00:46 -- accel/accel.sh@21 -- # val=Yes 00:06:33.755 12:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.755 12:00:46 -- accel/accel.sh@21 -- # val= 00:06:33.755 12:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.755 12:00:46 -- accel/accel.sh@21 -- # val= 00:06:33.755 12:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # IFS=: 00:06:33.755 12:00:46 -- accel/accel.sh@20 -- # read -r var val 00:06:34.699 12:00:47 -- accel/accel.sh@21 -- # val= 00:06:34.699 12:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.699 12:00:47 -- accel/accel.sh@20 -- # IFS=: 00:06:34.699 12:00:47 -- accel/accel.sh@20 -- # read -r var val 00:06:34.699 12:00:47 -- accel/accel.sh@21 -- # val= 00:06:34.699 12:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.699 12:00:47 -- accel/accel.sh@20 -- # IFS=: 00:06:34.699 12:00:47 -- accel/accel.sh@20 -- # read -r var val 00:06:34.699 12:00:47 -- accel/accel.sh@21 -- # val= 00:06:34.699 12:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.699 12:00:47 -- accel/accel.sh@20 -- # IFS=: 00:06:34.699 12:00:47 -- accel/accel.sh@20 -- # read -r var val 00:06:34.699 12:00:47 -- accel/accel.sh@21 -- # val= 00:06:34.699 12:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.699 12:00:47 -- accel/accel.sh@20 -- # IFS=: 00:06:34.699 12:00:47 -- accel/accel.sh@20 -- # read -r var val 00:06:34.699 12:00:47 -- accel/accel.sh@21 -- # val= 00:06:34.699 12:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.699 12:00:47 -- accel/accel.sh@20 -- # IFS=: 00:06:34.699 12:00:47 -- accel/accel.sh@20 -- # read -r var val 00:06:34.699 12:00:47 -- accel/accel.sh@21 -- # val= 00:06:34.699 12:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.699 12:00:47 -- accel/accel.sh@20 -- # IFS=: 00:06:34.699 12:00:47 -- accel/accel.sh@20 -- # read -r var val 00:06:34.699 12:00:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:34.699 12:00:47 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:34.699 12:00:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.699 00:06:34.699 real 0m2.479s 00:06:34.699 user 0m2.275s 00:06:34.699 sys 0m0.210s 00:06:34.699 12:00:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.699 12:00:47 -- common/autotest_common.sh@10 -- # set +x 00:06:34.699 ************************************ 00:06:34.699 END TEST accel_dualcast 00:06:34.699 ************************************ 00:06:34.699 12:00:47 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:34.699 12:00:47 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:34.699 12:00:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:34.699 12:00:47 -- common/autotest_common.sh@10 -- # set +x 00:06:34.699 ************************************ 00:06:34.699 START TEST accel_compare 00:06:34.699 ************************************ 00:06:34.699 12:00:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:06:34.699 12:00:47 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.699 12:00:47 
-- accel/accel.sh@17 -- # local accel_module 00:06:34.699 12:00:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:34.699 12:00:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:34.699 12:00:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.699 12:00:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.699 12:00:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.699 12:00:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.699 12:00:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.699 12:00:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.699 12:00:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.699 12:00:47 -- accel/accel.sh@42 -- # jq -r . 00:06:34.699 [2024-06-11 12:00:47.654525] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:34.699 [2024-06-11 12:00:47.654582] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1279024 ] 00:06:34.699 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.699 [2024-06-11 12:00:47.712451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.960 [2024-06-11 12:00:47.740249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.901 12:00:48 -- accel/accel.sh@18 -- # out=' 00:06:35.901 SPDK Configuration: 00:06:35.901 Core mask: 0x1 00:06:35.901 00:06:35.901 Accel Perf Configuration: 00:06:35.901 Workload Type: compare 00:06:35.901 Transfer size: 4096 bytes 00:06:35.901 Vector count 1 00:06:35.901 Module: software 00:06:35.901 Queue depth: 32 00:06:35.901 Allocate depth: 32 00:06:35.901 # threads/core: 1 00:06:35.901 Run time: 1 seconds 00:06:35.901 Verify: Yes 00:06:35.901 00:06:35.901 Running for 1 seconds... 00:06:35.901 00:06:35.901 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:35.901 ------------------------------------------------------------------------------------ 00:06:35.901 0,0 437120/s 1707 MiB/s 0 0 00:06:35.902 ==================================================================================== 00:06:35.902 Total 437120/s 1707 MiB/s 0 0' 00:06:35.902 12:00:48 -- accel/accel.sh@20 -- # IFS=: 00:06:35.902 12:00:48 -- accel/accel.sh@20 -- # read -r var val 00:06:35.902 12:00:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:35.902 12:00:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:35.902 12:00:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.902 12:00:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.902 12:00:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.902 12:00:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.902 12:00:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.902 12:00:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.902 12:00:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.902 12:00:48 -- accel/accel.sh@42 -- # jq -r . 00:06:35.902 [2024-06-11 12:00:48.878248] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
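The compare workload being exercised here is again the same accel_perf invocation, now with "-w compare -y"; the trailing "0 0" columns of each result row are the "Failed" and "Miscompares" counts from the table header, so a clean run ends with both at zero. A hypothetical standalone invocation, using only flags present in this log:

  # Sketch: compare workload; path copied from the trace above.
  ACCEL_PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  "$ACCEL_PERF" -t 1 -w compare -y   # the summary above reports 437120 ops/s ~ 1707 MiB/s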
00:06:35.902 [2024-06-11 12:00:48.878323] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1279142 ] 00:06:35.902 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.162 [2024-06-11 12:00:48.940006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.162 [2024-06-11 12:00:48.968431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.162 12:00:48 -- accel/accel.sh@21 -- # val= 00:06:36.162 12:00:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:36.162 12:00:49 -- accel/accel.sh@21 -- # val= 00:06:36.162 12:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:36.162 12:00:49 -- accel/accel.sh@21 -- # val=0x1 00:06:36.162 12:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:36.162 12:00:49 -- accel/accel.sh@21 -- # val= 00:06:36.162 12:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:36.162 12:00:49 -- accel/accel.sh@21 -- # val= 00:06:36.162 12:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:36.162 12:00:49 -- accel/accel.sh@21 -- # val=compare 00:06:36.162 12:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.162 12:00:49 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:36.162 12:00:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.162 12:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:36.162 12:00:49 -- accel/accel.sh@21 -- # val= 00:06:36.162 12:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:36.162 12:00:49 -- accel/accel.sh@21 -- # val=software 00:06:36.162 12:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.162 12:00:49 -- accel/accel.sh@23 -- # accel_module=software 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:36.162 12:00:49 -- accel/accel.sh@21 -- # val=32 00:06:36.162 12:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:36.162 12:00:49 -- accel/accel.sh@21 -- # val=32 00:06:36.162 12:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:36.162 12:00:49 -- accel/accel.sh@21 -- # val=1 00:06:36.162 12:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:36.162 12:00:49 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:36.162 12:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:36.162 12:00:49 -- accel/accel.sh@21 -- # val=Yes 00:06:36.162 12:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:36.162 12:00:49 -- accel/accel.sh@21 -- # val= 00:06:36.162 12:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:36.162 12:00:49 -- accel/accel.sh@21 -- # val= 00:06:36.162 12:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # IFS=: 00:06:36.162 12:00:49 -- accel/accel.sh@20 -- # read -r var val 00:06:37.103 12:00:50 -- accel/accel.sh@21 -- # val= 00:06:37.103 12:00:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.103 12:00:50 -- accel/accel.sh@20 -- # IFS=: 00:06:37.103 12:00:50 -- accel/accel.sh@20 -- # read -r var val 00:06:37.103 12:00:50 -- accel/accel.sh@21 -- # val= 00:06:37.103 12:00:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.103 12:00:50 -- accel/accel.sh@20 -- # IFS=: 00:06:37.103 12:00:50 -- accel/accel.sh@20 -- # read -r var val 00:06:37.103 12:00:50 -- accel/accel.sh@21 -- # val= 00:06:37.103 12:00:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.103 12:00:50 -- accel/accel.sh@20 -- # IFS=: 00:06:37.103 12:00:50 -- accel/accel.sh@20 -- # read -r var val 00:06:37.103 12:00:50 -- accel/accel.sh@21 -- # val= 00:06:37.103 12:00:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.103 12:00:50 -- accel/accel.sh@20 -- # IFS=: 00:06:37.103 12:00:50 -- accel/accel.sh@20 -- # read -r var val 00:06:37.103 12:00:50 -- accel/accel.sh@21 -- # val= 00:06:37.103 12:00:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.103 12:00:50 -- accel/accel.sh@20 -- # IFS=: 00:06:37.103 12:00:50 -- accel/accel.sh@20 -- # read -r var val 00:06:37.103 12:00:50 -- accel/accel.sh@21 -- # val= 00:06:37.103 12:00:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.103 12:00:50 -- accel/accel.sh@20 -- # IFS=: 00:06:37.103 12:00:50 -- accel/accel.sh@20 -- # read -r var val 00:06:37.103 12:00:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:37.103 12:00:50 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:37.103 12:00:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.103 00:06:37.103 real 0m2.447s 00:06:37.103 user 0m2.260s 00:06:37.103 sys 0m0.193s 00:06:37.103 12:00:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.103 12:00:50 -- common/autotest_common.sh@10 -- # set +x 00:06:37.103 ************************************ 00:06:37.103 END TEST accel_compare 00:06:37.103 ************************************ 00:06:37.103 12:00:50 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:37.103 12:00:50 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:37.103 12:00:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.103 12:00:50 -- common/autotest_common.sh@10 -- # set +x 00:06:37.103 ************************************ 00:06:37.103 START TEST accel_xor 00:06:37.103 ************************************ 00:06:37.103 12:00:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:06:37.103 12:00:50 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.103 12:00:50 -- accel/accel.sh@17 
-- # local accel_module 00:06:37.103 12:00:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:37.103 12:00:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:37.103 12:00:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.103 12:00:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.103 12:00:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.103 12:00:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.103 12:00:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.363 12:00:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.363 12:00:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.363 12:00:50 -- accel/accel.sh@42 -- # jq -r . 00:06:37.363 [2024-06-11 12:00:50.157698] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:37.363 [2024-06-11 12:00:50.157771] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1279493 ] 00:06:37.363 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.363 [2024-06-11 12:00:50.219549] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.363 [2024-06-11 12:00:50.247222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.746 12:00:51 -- accel/accel.sh@18 -- # out=' 00:06:38.746 SPDK Configuration: 00:06:38.746 Core mask: 0x1 00:06:38.746 00:06:38.746 Accel Perf Configuration: 00:06:38.746 Workload Type: xor 00:06:38.746 Source buffers: 2 00:06:38.746 Transfer size: 4096 bytes 00:06:38.746 Vector count 1 00:06:38.746 Module: software 00:06:38.746 Queue depth: 32 00:06:38.746 Allocate depth: 32 00:06:38.746 # threads/core: 1 00:06:38.746 Run time: 1 seconds 00:06:38.746 Verify: Yes 00:06:38.746 00:06:38.746 Running for 1 seconds... 00:06:38.746 00:06:38.747 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:38.747 ------------------------------------------------------------------------------------ 00:06:38.747 0,0 362016/s 1414 MiB/s 0 0 00:06:38.747 ==================================================================================== 00:06:38.747 Total 362016/s 1414 MiB/s 0 0' 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # IFS=: 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # read -r var val 00:06:38.747 12:00:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:38.747 12:00:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:38.747 12:00:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.747 12:00:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.747 12:00:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.747 12:00:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.747 12:00:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.747 12:00:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.747 12:00:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.747 12:00:51 -- accel/accel.sh@42 -- # jq -r . 00:06:38.747 [2024-06-11 12:00:51.385546] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
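For the xor case the configuration block adds a "Source buffers: 2" line, evidently the default since no -x option appears on the command line. A standalone sketch of the same run, plus a sanity check of the quoted rate (assumed path as before):

  # Sketch: xor with the default two source buffers.
  ACCEL_PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  "$ACCEL_PERF" -t 1 -w xor -y
  # 362016 ops/s x 4096 bytes ~ the reported bandwidth
  echo $(( 362016 * 4096 / 1024 / 1024 ))   # -> 1414, matching "1414 MiB/s"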
00:06:38.747 [2024-06-11 12:00:51.385619] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1279827 ] 00:06:38.747 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.747 [2024-06-11 12:00:51.446645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.747 [2024-06-11 12:00:51.473950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.747 12:00:51 -- accel/accel.sh@21 -- # val= 00:06:38.747 12:00:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # IFS=: 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # read -r var val 00:06:38.747 12:00:51 -- accel/accel.sh@21 -- # val= 00:06:38.747 12:00:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # IFS=: 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # read -r var val 00:06:38.747 12:00:51 -- accel/accel.sh@21 -- # val=0x1 00:06:38.747 12:00:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # IFS=: 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # read -r var val 00:06:38.747 12:00:51 -- accel/accel.sh@21 -- # val= 00:06:38.747 12:00:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # IFS=: 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # read -r var val 00:06:38.747 12:00:51 -- accel/accel.sh@21 -- # val= 00:06:38.747 12:00:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # IFS=: 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # read -r var val 00:06:38.747 12:00:51 -- accel/accel.sh@21 -- # val=xor 00:06:38.747 12:00:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.747 12:00:51 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # IFS=: 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # read -r var val 00:06:38.747 12:00:51 -- accel/accel.sh@21 -- # val=2 00:06:38.747 12:00:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # IFS=: 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # read -r var val 00:06:38.747 12:00:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:38.747 12:00:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # IFS=: 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # read -r var val 00:06:38.747 12:00:51 -- accel/accel.sh@21 -- # val= 00:06:38.747 12:00:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # IFS=: 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # read -r var val 00:06:38.747 12:00:51 -- accel/accel.sh@21 -- # val=software 00:06:38.747 12:00:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.747 12:00:51 -- accel/accel.sh@23 -- # accel_module=software 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # IFS=: 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # read -r var val 00:06:38.747 12:00:51 -- accel/accel.sh@21 -- # val=32 00:06:38.747 12:00:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # IFS=: 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # read -r var val 00:06:38.747 12:00:51 -- accel/accel.sh@21 -- # val=32 00:06:38.747 12:00:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # IFS=: 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # read -r var val 00:06:38.747 12:00:51 -- 
accel/accel.sh@21 -- # val=1 00:06:38.747 12:00:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # IFS=: 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # read -r var val 00:06:38.747 12:00:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:38.747 12:00:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # IFS=: 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # read -r var val 00:06:38.747 12:00:51 -- accel/accel.sh@21 -- # val=Yes 00:06:38.747 12:00:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # IFS=: 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # read -r var val 00:06:38.747 12:00:51 -- accel/accel.sh@21 -- # val= 00:06:38.747 12:00:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # IFS=: 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # read -r var val 00:06:38.747 12:00:51 -- accel/accel.sh@21 -- # val= 00:06:38.747 12:00:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # IFS=: 00:06:38.747 12:00:51 -- accel/accel.sh@20 -- # read -r var val 00:06:39.687 12:00:52 -- accel/accel.sh@21 -- # val= 00:06:39.687 12:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.687 12:00:52 -- accel/accel.sh@20 -- # IFS=: 00:06:39.687 12:00:52 -- accel/accel.sh@20 -- # read -r var val 00:06:39.687 12:00:52 -- accel/accel.sh@21 -- # val= 00:06:39.687 12:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.687 12:00:52 -- accel/accel.sh@20 -- # IFS=: 00:06:39.687 12:00:52 -- accel/accel.sh@20 -- # read -r var val 00:06:39.687 12:00:52 -- accel/accel.sh@21 -- # val= 00:06:39.687 12:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.687 12:00:52 -- accel/accel.sh@20 -- # IFS=: 00:06:39.687 12:00:52 -- accel/accel.sh@20 -- # read -r var val 00:06:39.687 12:00:52 -- accel/accel.sh@21 -- # val= 00:06:39.687 12:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.687 12:00:52 -- accel/accel.sh@20 -- # IFS=: 00:06:39.687 12:00:52 -- accel/accel.sh@20 -- # read -r var val 00:06:39.687 12:00:52 -- accel/accel.sh@21 -- # val= 00:06:39.687 12:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.687 12:00:52 -- accel/accel.sh@20 -- # IFS=: 00:06:39.687 12:00:52 -- accel/accel.sh@20 -- # read -r var val 00:06:39.687 12:00:52 -- accel/accel.sh@21 -- # val= 00:06:39.687 12:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.687 12:00:52 -- accel/accel.sh@20 -- # IFS=: 00:06:39.687 12:00:52 -- accel/accel.sh@20 -- # read -r var val 00:06:39.687 12:00:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:39.687 12:00:52 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:39.687 12:00:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.687 00:06:39.687 real 0m2.459s 00:06:39.687 user 0m2.270s 00:06:39.687 sys 0m0.194s 00:06:39.687 12:00:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.687 12:00:52 -- common/autotest_common.sh@10 -- # set +x 00:06:39.687 ************************************ 00:06:39.687 END TEST accel_xor 00:06:39.687 ************************************ 00:06:39.687 12:00:52 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:39.687 12:00:52 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:39.687 12:00:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.687 12:00:52 -- common/autotest_common.sh@10 -- # set +x 00:06:39.687 ************************************ 00:06:39.687 START TEST accel_xor 
00:06:39.687 ************************************ 00:06:39.687 12:00:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:06:39.687 12:00:52 -- accel/accel.sh@16 -- # local accel_opc 00:06:39.687 12:00:52 -- accel/accel.sh@17 -- # local accel_module 00:06:39.687 12:00:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:39.687 12:00:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:39.687 12:00:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.687 12:00:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.687 12:00:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.687 12:00:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.687 12:00:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.687 12:00:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.687 12:00:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.687 12:00:52 -- accel/accel.sh@42 -- # jq -r . 00:06:39.687 [2024-06-11 12:00:52.661196] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:39.687 [2024-06-11 12:00:52.661289] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1280037 ] 00:06:39.687 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.947 [2024-06-11 12:00:52.723559] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.947 [2024-06-11 12:00:52.754136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.886 12:00:53 -- accel/accel.sh@18 -- # out=' 00:06:40.886 SPDK Configuration: 00:06:40.886 Core mask: 0x1 00:06:40.886 00:06:40.886 Accel Perf Configuration: 00:06:40.886 Workload Type: xor 00:06:40.886 Source buffers: 3 00:06:40.886 Transfer size: 4096 bytes 00:06:40.886 Vector count 1 00:06:40.886 Module: software 00:06:40.886 Queue depth: 32 00:06:40.886 Allocate depth: 32 00:06:40.886 # threads/core: 1 00:06:40.886 Run time: 1 seconds 00:06:40.886 Verify: Yes 00:06:40.886 00:06:40.886 Running for 1 seconds... 00:06:40.886 00:06:40.886 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:40.886 ------------------------------------------------------------------------------------ 00:06:40.886 0,0 344800/s 1346 MiB/s 0 0 00:06:40.886 ==================================================================================== 00:06:40.886 Total 344800/s 1346 MiB/s 0 0' 00:06:40.886 12:00:53 -- accel/accel.sh@20 -- # IFS=: 00:06:40.886 12:00:53 -- accel/accel.sh@20 -- # read -r var val 00:06:40.886 12:00:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:40.886 12:00:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:40.886 12:00:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.886 12:00:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.886 12:00:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.886 12:00:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.886 12:00:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.886 12:00:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.886 12:00:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.886 12:00:53 -- accel/accel.sh@42 -- # jq -r . 00:06:40.886 [2024-06-11 12:00:53.893803] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
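This second xor test passes "-x 3" on the accel_perf command line, and the configuration block above duly reports "Source buffers: 3"; everything else is unchanged. A sketch of the standalone equivalent and a check of the quoted rate (binary path assumed as in the trace):

  ACCEL_PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  "$ACCEL_PERF" -t 1 -w xor -y -x 3   # three source buffers instead of the default two
  echo $(( 344800 * 4096 / 1024 / 1024 ))   # -> 1346, matching "1346 MiB/s"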
00:06:40.886 [2024-06-11 12:00:53.893896] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1280195 ] 00:06:41.145 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.145 [2024-06-11 12:00:53.957738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.146 [2024-06-11 12:00:53.986500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.146 12:00:54 -- accel/accel.sh@21 -- # val= 00:06:41.146 12:00:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # IFS=: 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # read -r var val 00:06:41.146 12:00:54 -- accel/accel.sh@21 -- # val= 00:06:41.146 12:00:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # IFS=: 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # read -r var val 00:06:41.146 12:00:54 -- accel/accel.sh@21 -- # val=0x1 00:06:41.146 12:00:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # IFS=: 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # read -r var val 00:06:41.146 12:00:54 -- accel/accel.sh@21 -- # val= 00:06:41.146 12:00:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # IFS=: 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # read -r var val 00:06:41.146 12:00:54 -- accel/accel.sh@21 -- # val= 00:06:41.146 12:00:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # IFS=: 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # read -r var val 00:06:41.146 12:00:54 -- accel/accel.sh@21 -- # val=xor 00:06:41.146 12:00:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.146 12:00:54 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # IFS=: 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # read -r var val 00:06:41.146 12:00:54 -- accel/accel.sh@21 -- # val=3 00:06:41.146 12:00:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # IFS=: 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # read -r var val 00:06:41.146 12:00:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:41.146 12:00:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # IFS=: 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # read -r var val 00:06:41.146 12:00:54 -- accel/accel.sh@21 -- # val= 00:06:41.146 12:00:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # IFS=: 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # read -r var val 00:06:41.146 12:00:54 -- accel/accel.sh@21 -- # val=software 00:06:41.146 12:00:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.146 12:00:54 -- accel/accel.sh@23 -- # accel_module=software 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # IFS=: 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # read -r var val 00:06:41.146 12:00:54 -- accel/accel.sh@21 -- # val=32 00:06:41.146 12:00:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # IFS=: 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # read -r var val 00:06:41.146 12:00:54 -- accel/accel.sh@21 -- # val=32 00:06:41.146 12:00:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # IFS=: 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # read -r var val 00:06:41.146 12:00:54 -- 
accel/accel.sh@21 -- # val=1 00:06:41.146 12:00:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # IFS=: 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # read -r var val 00:06:41.146 12:00:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:41.146 12:00:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # IFS=: 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # read -r var val 00:06:41.146 12:00:54 -- accel/accel.sh@21 -- # val=Yes 00:06:41.146 12:00:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # IFS=: 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # read -r var val 00:06:41.146 12:00:54 -- accel/accel.sh@21 -- # val= 00:06:41.146 12:00:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # IFS=: 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # read -r var val 00:06:41.146 12:00:54 -- accel/accel.sh@21 -- # val= 00:06:41.146 12:00:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # IFS=: 00:06:41.146 12:00:54 -- accel/accel.sh@20 -- # read -r var val 00:06:42.085 12:00:55 -- accel/accel.sh@21 -- # val= 00:06:42.085 12:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.085 12:00:55 -- accel/accel.sh@20 -- # IFS=: 00:06:42.085 12:00:55 -- accel/accel.sh@20 -- # read -r var val 00:06:42.085 12:00:55 -- accel/accel.sh@21 -- # val= 00:06:42.085 12:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.085 12:00:55 -- accel/accel.sh@20 -- # IFS=: 00:06:42.085 12:00:55 -- accel/accel.sh@20 -- # read -r var val 00:06:42.085 12:00:55 -- accel/accel.sh@21 -- # val= 00:06:42.085 12:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.085 12:00:55 -- accel/accel.sh@20 -- # IFS=: 00:06:42.085 12:00:55 -- accel/accel.sh@20 -- # read -r var val 00:06:42.085 12:00:55 -- accel/accel.sh@21 -- # val= 00:06:42.085 12:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.085 12:00:55 -- accel/accel.sh@20 -- # IFS=: 00:06:42.085 12:00:55 -- accel/accel.sh@20 -- # read -r var val 00:06:42.085 12:00:55 -- accel/accel.sh@21 -- # val= 00:06:42.085 12:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.085 12:00:55 -- accel/accel.sh@20 -- # IFS=: 00:06:42.085 12:00:55 -- accel/accel.sh@20 -- # read -r var val 00:06:42.085 12:00:55 -- accel/accel.sh@21 -- # val= 00:06:42.085 12:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.085 12:00:55 -- accel/accel.sh@20 -- # IFS=: 00:06:42.085 12:00:55 -- accel/accel.sh@20 -- # read -r var val 00:06:42.085 12:00:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:42.085 12:00:55 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:42.085 12:00:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.085 00:06:42.085 real 0m2.469s 00:06:42.085 user 0m2.267s 00:06:42.085 sys 0m0.208s 00:06:42.085 12:00:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.085 12:00:55 -- common/autotest_common.sh@10 -- # set +x 00:06:42.085 ************************************ 00:06:42.085 END TEST accel_xor 00:06:42.085 ************************************ 00:06:42.346 12:00:55 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:42.346 12:00:55 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:42.346 12:00:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.346 12:00:55 -- common/autotest_common.sh@10 -- # set +x 00:06:42.346 ************************************ 00:06:42.346 START TEST 
accel_dif_verify 00:06:42.346 ************************************ 00:06:42.346 12:00:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:06:42.346 12:00:55 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.346 12:00:55 -- accel/accel.sh@17 -- # local accel_module 00:06:42.346 12:00:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:42.346 12:00:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:42.346 12:00:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.346 12:00:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.346 12:00:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.346 12:00:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.346 12:00:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.346 12:00:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.346 12:00:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.346 12:00:55 -- accel/accel.sh@42 -- # jq -r . 00:06:42.346 [2024-06-11 12:00:55.170580] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:42.346 [2024-06-11 12:00:55.170667] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1280552 ] 00:06:42.346 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.346 [2024-06-11 12:00:55.232996] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.346 [2024-06-11 12:00:55.261659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.731 12:00:56 -- accel/accel.sh@18 -- # out=' 00:06:43.731 SPDK Configuration: 00:06:43.731 Core mask: 0x1 00:06:43.731 00:06:43.731 Accel Perf Configuration: 00:06:43.731 Workload Type: dif_verify 00:06:43.731 Vector size: 4096 bytes 00:06:43.731 Transfer size: 4096 bytes 00:06:43.731 Block size: 512 bytes 00:06:43.731 Metadata size: 8 bytes 00:06:43.731 Vector count 1 00:06:43.731 Module: software 00:06:43.731 Queue depth: 32 00:06:43.731 Allocate depth: 32 00:06:43.731 # threads/core: 1 00:06:43.731 Run time: 1 seconds 00:06:43.731 Verify: No 00:06:43.731 00:06:43.731 Running for 1 seconds... 00:06:43.731 00:06:43.731 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:43.731 ------------------------------------------------------------------------------------ 00:06:43.731 0,0 94688/s 375 MiB/s 0 0 00:06:43.731 ==================================================================================== 00:06:43.731 Total 94688/s 369 MiB/s 0 0' 00:06:43.731 12:00:56 -- accel/accel.sh@20 -- # IFS=: 00:06:43.731 12:00:56 -- accel/accel.sh@20 -- # read -r var val 00:06:43.731 12:00:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:43.731 12:00:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:43.731 12:00:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.731 12:00:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.731 12:00:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.731 12:00:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.731 12:00:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.731 12:00:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.731 12:00:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.731 12:00:56 -- accel/accel.sh@42 -- # jq -r . 
00:06:43.731 [2024-06-11 12:00:56.398957] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:43.731 [2024-06-11 12:00:56.399040] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1280889 ] 00:06:43.731 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.731 [2024-06-11 12:00:56.459898] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.731 [2024-06-11 12:00:56.487313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.731 12:00:56 -- accel/accel.sh@21 -- # val= 00:06:43.731 12:00:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.731 12:00:56 -- accel/accel.sh@20 -- # IFS=: 00:06:43.731 12:00:56 -- accel/accel.sh@20 -- # read -r var val 00:06:43.731 12:00:56 -- accel/accel.sh@21 -- # val= 00:06:43.731 12:00:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.731 12:00:56 -- accel/accel.sh@20 -- # IFS=: 00:06:43.731 12:00:56 -- accel/accel.sh@20 -- # read -r var val 00:06:43.731 12:00:56 -- accel/accel.sh@21 -- # val=0x1 00:06:43.731 12:00:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.731 12:00:56 -- accel/accel.sh@20 -- # IFS=: 00:06:43.731 12:00:56 -- accel/accel.sh@20 -- # read -r var val 00:06:43.731 12:00:56 -- accel/accel.sh@21 -- # val= 00:06:43.731 12:00:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.731 12:00:56 -- accel/accel.sh@20 -- # IFS=: 00:06:43.731 12:00:56 -- accel/accel.sh@20 -- # read -r var val 00:06:43.731 12:00:56 -- accel/accel.sh@21 -- # val= 00:06:43.731 12:00:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.731 12:00:56 -- accel/accel.sh@20 -- # IFS=: 00:06:43.731 12:00:56 -- accel/accel.sh@20 -- # read -r var val 00:06:43.731 12:00:56 -- accel/accel.sh@21 -- # val=dif_verify 00:06:43.731 12:00:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.731 12:00:56 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:43.731 12:00:56 -- accel/accel.sh@20 -- # IFS=: 00:06:43.731 12:00:56 -- accel/accel.sh@20 -- # read -r var val 00:06:43.731 12:00:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:43.731 12:00:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.731 12:00:56 -- accel/accel.sh@20 -- # IFS=: 00:06:43.731 12:00:56 -- accel/accel.sh@20 -- # read -r var val 00:06:43.731 12:00:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:43.731 12:00:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.731 12:00:56 -- accel/accel.sh@20 -- # IFS=: 00:06:43.732 12:00:56 -- accel/accel.sh@20 -- # read -r var val 00:06:43.732 12:00:56 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:43.732 12:00:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.732 12:00:56 -- accel/accel.sh@20 -- # IFS=: 00:06:43.732 12:00:56 -- accel/accel.sh@20 -- # read -r var val 00:06:43.732 12:00:56 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:43.732 12:00:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.732 12:00:56 -- accel/accel.sh@20 -- # IFS=: 00:06:43.732 12:00:56 -- accel/accel.sh@20 -- # read -r var val 00:06:43.732 12:00:56 -- accel/accel.sh@21 -- # val= 00:06:43.732 12:00:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.732 12:00:56 -- accel/accel.sh@20 -- # IFS=: 00:06:43.732 12:00:56 -- accel/accel.sh@20 -- # read -r var val 00:06:43.732 12:00:56 -- accel/accel.sh@21 -- # val=software 00:06:43.732 12:00:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.732 12:00:56 -- accel/accel.sh@23 -- # 
accel_module=software 00:06:43.732 12:00:56 -- accel/accel.sh@20 -- # IFS=: 00:06:43.732 12:00:56 -- accel/accel.sh@20 -- # read -r var val 00:06:43.732 12:00:56 -- accel/accel.sh@21 -- # val=32 00:06:43.732 12:00:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.732 12:00:56 -- accel/accel.sh@20 -- # IFS=: 00:06:43.732 12:00:56 -- accel/accel.sh@20 -- # read -r var val 00:06:43.732 12:00:56 -- accel/accel.sh@21 -- # val=32 00:06:43.732 12:00:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.732 12:00:56 -- accel/accel.sh@20 -- # IFS=: 00:06:43.732 12:00:56 -- accel/accel.sh@20 -- # read -r var val 00:06:43.732 12:00:56 -- accel/accel.sh@21 -- # val=1 00:06:43.732 12:00:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.732 12:00:56 -- accel/accel.sh@20 -- # IFS=: 00:06:43.732 12:00:56 -- accel/accel.sh@20 -- # read -r var val 00:06:43.732 12:00:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:43.732 12:00:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.732 12:00:56 -- accel/accel.sh@20 -- # IFS=: 00:06:43.732 12:00:56 -- accel/accel.sh@20 -- # read -r var val 00:06:43.732 12:00:56 -- accel/accel.sh@21 -- # val=No 00:06:43.732 12:00:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.732 12:00:56 -- accel/accel.sh@20 -- # IFS=: 00:06:43.732 12:00:56 -- accel/accel.sh@20 -- # read -r var val 00:06:43.732 12:00:56 -- accel/accel.sh@21 -- # val= 00:06:43.732 12:00:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.732 12:00:56 -- accel/accel.sh@20 -- # IFS=: 00:06:43.732 12:00:56 -- accel/accel.sh@20 -- # read -r var val 00:06:43.732 12:00:56 -- accel/accel.sh@21 -- # val= 00:06:43.732 12:00:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.732 12:00:56 -- accel/accel.sh@20 -- # IFS=: 00:06:43.732 12:00:56 -- accel/accel.sh@20 -- # read -r var val 00:06:44.672 12:00:57 -- accel/accel.sh@21 -- # val= 00:06:44.672 12:00:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.672 12:00:57 -- accel/accel.sh@20 -- # IFS=: 00:06:44.672 12:00:57 -- accel/accel.sh@20 -- # read -r var val 00:06:44.672 12:00:57 -- accel/accel.sh@21 -- # val= 00:06:44.672 12:00:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.672 12:00:57 -- accel/accel.sh@20 -- # IFS=: 00:06:44.672 12:00:57 -- accel/accel.sh@20 -- # read -r var val 00:06:44.672 12:00:57 -- accel/accel.sh@21 -- # val= 00:06:44.672 12:00:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.672 12:00:57 -- accel/accel.sh@20 -- # IFS=: 00:06:44.672 12:00:57 -- accel/accel.sh@20 -- # read -r var val 00:06:44.672 12:00:57 -- accel/accel.sh@21 -- # val= 00:06:44.672 12:00:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.672 12:00:57 -- accel/accel.sh@20 -- # IFS=: 00:06:44.672 12:00:57 -- accel/accel.sh@20 -- # read -r var val 00:06:44.672 12:00:57 -- accel/accel.sh@21 -- # val= 00:06:44.672 12:00:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.672 12:00:57 -- accel/accel.sh@20 -- # IFS=: 00:06:44.672 12:00:57 -- accel/accel.sh@20 -- # read -r var val 00:06:44.672 12:00:57 -- accel/accel.sh@21 -- # val= 00:06:44.672 12:00:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.672 12:00:57 -- accel/accel.sh@20 -- # IFS=: 00:06:44.672 12:00:57 -- accel/accel.sh@20 -- # read -r var val 00:06:44.672 12:00:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:44.672 12:00:57 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:44.672 12:00:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.672 00:06:44.672 real 0m2.461s 00:06:44.672 user 0m2.267s 00:06:44.672 sys 0m0.203s 00:06:44.672 12:00:57 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.672 12:00:57 -- common/autotest_common.sh@10 -- # set +x 00:06:44.672 ************************************ 00:06:44.672 END TEST accel_dif_verify 00:06:44.672 ************************************ 00:06:44.672 12:00:57 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:44.672 12:00:57 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:44.672 12:00:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.672 12:00:57 -- common/autotest_common.sh@10 -- # set +x 00:06:44.672 ************************************ 00:06:44.672 START TEST accel_dif_generate 00:06:44.672 ************************************ 00:06:44.672 12:00:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:06:44.672 12:00:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.672 12:00:57 -- accel/accel.sh@17 -- # local accel_module 00:06:44.672 12:00:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:44.672 12:00:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:44.672 12:00:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.672 12:00:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.672 12:00:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.672 12:00:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.672 12:00:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.672 12:00:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.672 12:00:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.672 12:00:57 -- accel/accel.sh@42 -- # jq -r . 00:06:44.672 [2024-06-11 12:00:57.671161] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:44.672 [2024-06-11 12:00:57.671237] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281075 ] 00:06:44.672 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.932 [2024-06-11 12:00:57.734525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.932 [2024-06-11 12:00:57.765403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.871 12:00:58 -- accel/accel.sh@18 -- # out=' 00:06:45.871 SPDK Configuration: 00:06:45.871 Core mask: 0x1 00:06:45.872 00:06:45.872 Accel Perf Configuration: 00:06:45.872 Workload Type: dif_generate 00:06:45.872 Vector size: 4096 bytes 00:06:45.872 Transfer size: 4096 bytes 00:06:45.872 Block size: 512 bytes 00:06:45.872 Metadata size: 8 bytes 00:06:45.872 Vector count 1 00:06:45.872 Module: software 00:06:45.872 Queue depth: 32 00:06:45.872 Allocate depth: 32 00:06:45.872 # threads/core: 1 00:06:45.872 Run time: 1 seconds 00:06:45.872 Verify: No 00:06:45.872 00:06:45.872 Running for 1 seconds... 
00:06:45.872 00:06:45.872 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:45.872 ------------------------------------------------------------------------------------ 00:06:45.872 0,0 114624/s 454 MiB/s 0 0 00:06:45.872 ==================================================================================== 00:06:45.872 Total 114624/s 447 MiB/s 0 0' 00:06:45.872 12:00:58 -- accel/accel.sh@20 -- # IFS=: 00:06:45.872 12:00:58 -- accel/accel.sh@20 -- # read -r var val 00:06:45.872 12:00:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:45.872 12:00:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:45.872 12:00:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.872 12:00:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.872 12:00:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.872 12:00:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.872 12:00:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.872 12:00:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.872 12:00:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.872 12:00:58 -- accel/accel.sh@42 -- # jq -r . 00:06:45.872 [2024-06-11 12:00:58.904445] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:45.872 [2024-06-11 12:00:58.904524] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281256 ] 00:06:46.132 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.133 [2024-06-11 12:00:58.966048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.133 [2024-06-11 12:00:58.994090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.133 12:00:59 -- accel/accel.sh@21 -- # val= 00:06:46.133 12:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:46.133 12:00:59 -- accel/accel.sh@21 -- # val= 00:06:46.133 12:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:46.133 12:00:59 -- accel/accel.sh@21 -- # val=0x1 00:06:46.133 12:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:46.133 12:00:59 -- accel/accel.sh@21 -- # val= 00:06:46.133 12:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:46.133 12:00:59 -- accel/accel.sh@21 -- # val= 00:06:46.133 12:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:46.133 12:00:59 -- accel/accel.sh@21 -- # val=dif_generate 00:06:46.133 12:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.133 12:00:59 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:46.133 12:00:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:46.133 12:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # IFS=: 
00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:46.133 12:00:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:46.133 12:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:46.133 12:00:59 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:46.133 12:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:46.133 12:00:59 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:46.133 12:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:46.133 12:00:59 -- accel/accel.sh@21 -- # val= 00:06:46.133 12:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:46.133 12:00:59 -- accel/accel.sh@21 -- # val=software 00:06:46.133 12:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.133 12:00:59 -- accel/accel.sh@23 -- # accel_module=software 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:46.133 12:00:59 -- accel/accel.sh@21 -- # val=32 00:06:46.133 12:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:46.133 12:00:59 -- accel/accel.sh@21 -- # val=32 00:06:46.133 12:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:46.133 12:00:59 -- accel/accel.sh@21 -- # val=1 00:06:46.133 12:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:46.133 12:00:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:46.133 12:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:46.133 12:00:59 -- accel/accel.sh@21 -- # val=No 00:06:46.133 12:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:46.133 12:00:59 -- accel/accel.sh@21 -- # val= 00:06:46.133 12:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:46.133 12:00:59 -- accel/accel.sh@21 -- # val= 00:06:46.133 12:00:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # IFS=: 00:06:46.133 12:00:59 -- accel/accel.sh@20 -- # read -r var val 00:06:47.073 12:01:00 -- accel/accel.sh@21 -- # val= 00:06:47.333 12:01:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.333 12:01:00 -- accel/accel.sh@20 -- # IFS=: 00:06:47.333 12:01:00 -- accel/accel.sh@20 -- # read -r var val 00:06:47.333 12:01:00 -- accel/accel.sh@21 -- # val= 00:06:47.333 12:01:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.333 12:01:00 -- accel/accel.sh@20 -- # IFS=: 00:06:47.333 12:01:00 -- accel/accel.sh@20 -- # read -r var val 00:06:47.333 12:01:00 -- accel/accel.sh@21 -- # val= 00:06:47.333 12:01:00 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:47.333 12:01:00 -- accel/accel.sh@20 -- # IFS=: 00:06:47.333 12:01:00 -- accel/accel.sh@20 -- # read -r var val 00:06:47.333 12:01:00 -- accel/accel.sh@21 -- # val= 00:06:47.333 12:01:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.333 12:01:00 -- accel/accel.sh@20 -- # IFS=: 00:06:47.333 12:01:00 -- accel/accel.sh@20 -- # read -r var val 00:06:47.333 12:01:00 -- accel/accel.sh@21 -- # val= 00:06:47.333 12:01:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.333 12:01:00 -- accel/accel.sh@20 -- # IFS=: 00:06:47.333 12:01:00 -- accel/accel.sh@20 -- # read -r var val 00:06:47.333 12:01:00 -- accel/accel.sh@21 -- # val= 00:06:47.333 12:01:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.333 12:01:00 -- accel/accel.sh@20 -- # IFS=: 00:06:47.333 12:01:00 -- accel/accel.sh@20 -- # read -r var val 00:06:47.333 12:01:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:47.333 12:01:00 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:47.333 12:01:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.333 00:06:47.333 real 0m2.469s 00:06:47.333 user 0m2.270s 00:06:47.333 sys 0m0.207s 00:06:47.333 12:01:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.333 12:01:00 -- common/autotest_common.sh@10 -- # set +x 00:06:47.333 ************************************ 00:06:47.333 END TEST accel_dif_generate 00:06:47.333 ************************************ 00:06:47.333 12:01:00 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:47.333 12:01:00 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:47.333 12:01:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.333 12:01:00 -- common/autotest_common.sh@10 -- # set +x 00:06:47.333 ************************************ 00:06:47.333 START TEST accel_dif_generate_copy 00:06:47.333 ************************************ 00:06:47.333 12:01:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:06:47.333 12:01:00 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.333 12:01:00 -- accel/accel.sh@17 -- # local accel_module 00:06:47.333 12:01:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:47.333 12:01:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:47.333 12:01:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.333 12:01:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.333 12:01:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.333 12:01:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.333 12:01:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.333 12:01:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.333 12:01:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.333 12:01:00 -- accel/accel.sh@42 -- # jq -r . 00:06:47.333 [2024-06-11 12:01:00.180327] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:47.333 [2024-06-11 12:01:00.180409] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281605 ] 00:06:47.333 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.333 [2024-06-11 12:01:00.242034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.333 [2024-06-11 12:01:00.270524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.715 12:01:01 -- accel/accel.sh@18 -- # out=' 00:06:48.715 SPDK Configuration: 00:06:48.715 Core mask: 0x1 00:06:48.715 00:06:48.715 Accel Perf Configuration: 00:06:48.715 Workload Type: dif_generate_copy 00:06:48.715 Vector size: 4096 bytes 00:06:48.715 Transfer size: 4096 bytes 00:06:48.715 Vector count 1 00:06:48.715 Module: software 00:06:48.715 Queue depth: 32 00:06:48.715 Allocate depth: 32 00:06:48.715 # threads/core: 1 00:06:48.715 Run time: 1 seconds 00:06:48.715 Verify: No 00:06:48.715 00:06:48.715 Running for 1 seconds... 00:06:48.715 00:06:48.715 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:48.715 ------------------------------------------------------------------------------------ 00:06:48.715 0,0 87680/s 347 MiB/s 0 0 00:06:48.715 ==================================================================================== 00:06:48.715 Total 87680/s 342 MiB/s 0 0' 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:48.715 12:01:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:48.715 12:01:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:48.715 12:01:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.715 12:01:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.715 12:01:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.715 12:01:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.715 12:01:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.715 12:01:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.715 12:01:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.715 12:01:01 -- accel/accel.sh@42 -- # jq -r . 00:06:48.715 [2024-06-11 12:01:01.408467] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:48.715 [2024-06-11 12:01:01.408539] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281941 ] 00:06:48.715 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.715 [2024-06-11 12:01:01.469749] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.715 [2024-06-11 12:01:01.498497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.715 12:01:01 -- accel/accel.sh@21 -- # val= 00:06:48.715 12:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:48.715 12:01:01 -- accel/accel.sh@21 -- # val= 00:06:48.715 12:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:48.715 12:01:01 -- accel/accel.sh@21 -- # val=0x1 00:06:48.715 12:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:48.715 12:01:01 -- accel/accel.sh@21 -- # val= 00:06:48.715 12:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:48.715 12:01:01 -- accel/accel.sh@21 -- # val= 00:06:48.715 12:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:48.715 12:01:01 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:48.715 12:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.715 12:01:01 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:48.715 12:01:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.715 12:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:48.715 12:01:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.715 12:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:48.715 12:01:01 -- accel/accel.sh@21 -- # val= 00:06:48.715 12:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:48.715 12:01:01 -- accel/accel.sh@21 -- # val=software 00:06:48.715 12:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.715 12:01:01 -- accel/accel.sh@23 -- # accel_module=software 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:48.715 12:01:01 -- accel/accel.sh@21 -- # val=32 00:06:48.715 12:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:48.715 12:01:01 -- accel/accel.sh@21 -- # val=32 00:06:48.715 12:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # read -r 
var val 00:06:48.715 12:01:01 -- accel/accel.sh@21 -- # val=1 00:06:48.715 12:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:48.715 12:01:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:48.715 12:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:48.715 12:01:01 -- accel/accel.sh@21 -- # val=No 00:06:48.715 12:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:48.715 12:01:01 -- accel/accel.sh@21 -- # val= 00:06:48.715 12:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:48.715 12:01:01 -- accel/accel.sh@21 -- # val= 00:06:48.715 12:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # IFS=: 00:06:48.715 12:01:01 -- accel/accel.sh@20 -- # read -r var val 00:06:49.657 12:01:02 -- accel/accel.sh@21 -- # val= 00:06:49.658 12:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.658 12:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:49.658 12:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:49.658 12:01:02 -- accel/accel.sh@21 -- # val= 00:06:49.658 12:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.658 12:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:49.658 12:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:49.658 12:01:02 -- accel/accel.sh@21 -- # val= 00:06:49.658 12:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.658 12:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:49.658 12:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:49.658 12:01:02 -- accel/accel.sh@21 -- # val= 00:06:49.658 12:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.658 12:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:49.658 12:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:49.658 12:01:02 -- accel/accel.sh@21 -- # val= 00:06:49.658 12:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.658 12:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:49.658 12:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:49.658 12:01:02 -- accel/accel.sh@21 -- # val= 00:06:49.658 12:01:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.658 12:01:02 -- accel/accel.sh@20 -- # IFS=: 00:06:49.658 12:01:02 -- accel/accel.sh@20 -- # read -r var val 00:06:49.658 12:01:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:49.658 12:01:02 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:49.658 12:01:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.658 00:06:49.658 real 0m2.461s 00:06:49.658 user 0m2.257s 00:06:49.658 sys 0m0.211s 00:06:49.658 12:01:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.658 12:01:02 -- common/autotest_common.sh@10 -- # set +x 00:06:49.658 ************************************ 00:06:49.658 END TEST accel_dif_generate_copy 00:06:49.658 ************************************ 00:06:49.658 12:01:02 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:49.658 12:01:02 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:49.658 12:01:02 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:49.658 12:01:02 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.658 12:01:02 -- common/autotest_common.sh@10 -- # set +x 00:06:49.658 ************************************ 00:06:49.658 START TEST accel_comp 00:06:49.658 ************************************ 00:06:49.658 12:01:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:49.658 12:01:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.658 12:01:02 -- accel/accel.sh@17 -- # local accel_module 00:06:49.658 12:01:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:49.658 12:01:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:49.658 12:01:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.658 12:01:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.658 12:01:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.658 12:01:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.658 12:01:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.658 12:01:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.658 12:01:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.658 12:01:02 -- accel/accel.sh@42 -- # jq -r . 00:06:49.658 [2024-06-11 12:01:02.683047] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:49.658 [2024-06-11 12:01:02.683117] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1282099 ] 00:06:49.918 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.918 [2024-06-11 12:01:02.744186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.918 [2024-06-11 12:01:02.772269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.858 12:01:03 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:50.858 00:06:50.858 SPDK Configuration: 00:06:50.858 Core mask: 0x1 00:06:50.858 00:06:50.858 Accel Perf Configuration: 00:06:50.858 Workload Type: compress 00:06:50.858 Transfer size: 4096 bytes 00:06:50.858 Vector count 1 00:06:50.858 Module: software 00:06:50.858 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:50.858 Queue depth: 32 00:06:50.858 Allocate depth: 32 00:06:50.858 # threads/core: 1 00:06:50.858 Run time: 1 seconds 00:06:50.858 Verify: No 00:06:50.858 00:06:50.858 Running for 1 seconds... 
00:06:50.858 00:06:50.858 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:50.858 ------------------------------------------------------------------------------------ 00:06:50.858 0,0 47712/s 198 MiB/s 0 0 00:06:50.858 ==================================================================================== 00:06:50.858 Total 47712/s 186 MiB/s 0 0' 00:06:50.858 12:01:03 -- accel/accel.sh@20 -- # IFS=: 00:06:50.858 12:01:03 -- accel/accel.sh@20 -- # read -r var val 00:06:50.858 12:01:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:50.859 12:01:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:50.859 12:01:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.859 12:01:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.859 12:01:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.859 12:01:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.859 12:01:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.859 12:01:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.859 12:01:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.859 12:01:03 -- accel/accel.sh@42 -- # jq -r . 00:06:51.119 [2024-06-11 12:01:03.913581] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:51.119 [2024-06-11 12:01:03.913657] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1282312 ] 00:06:51.119 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.119 [2024-06-11 12:01:03.975120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.119 [2024-06-11 12:01:04.002571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.119 12:01:04 -- accel/accel.sh@21 -- # val= 00:06:51.119 12:01:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 12:01:04 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 12:01:04 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 12:01:04 -- accel/accel.sh@21 -- # val= 00:06:51.119 12:01:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 12:01:04 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 12:01:04 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 12:01:04 -- accel/accel.sh@21 -- # val= 00:06:51.119 12:01:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 12:01:04 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 12:01:04 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 12:01:04 -- accel/accel.sh@21 -- # val=0x1 00:06:51.119 12:01:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 12:01:04 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 12:01:04 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 12:01:04 -- accel/accel.sh@21 -- # val= 00:06:51.119 12:01:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 12:01:04 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 12:01:04 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 12:01:04 -- accel/accel.sh@21 -- # val= 00:06:51.119 12:01:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 12:01:04 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 12:01:04 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 12:01:04 -- accel/accel.sh@21 -- # val=compress 00:06:51.119 12:01:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 
12:01:04 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:51.119 12:01:04 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 12:01:04 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 12:01:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:51.119 12:01:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 12:01:04 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 12:01:04 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 12:01:04 -- accel/accel.sh@21 -- # val= 00:06:51.119 12:01:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 12:01:04 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 12:01:04 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 12:01:04 -- accel/accel.sh@21 -- # val=software 00:06:51.119 12:01:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 12:01:04 -- accel/accel.sh@23 -- # accel_module=software 00:06:51.119 12:01:04 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 12:01:04 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 12:01:04 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:51.119 12:01:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 12:01:04 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 12:01:04 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 12:01:04 -- accel/accel.sh@21 -- # val=32 00:06:51.119 12:01:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.119 12:01:04 -- accel/accel.sh@20 -- # IFS=: 00:06:51.119 12:01:04 -- accel/accel.sh@20 -- # read -r var val 00:06:51.119 12:01:04 -- accel/accel.sh@21 -- # val=32 00:06:51.120 12:01:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.120 12:01:04 -- accel/accel.sh@20 -- # IFS=: 00:06:51.120 12:01:04 -- accel/accel.sh@20 -- # read -r var val 00:06:51.120 12:01:04 -- accel/accel.sh@21 -- # val=1 00:06:51.120 12:01:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.120 12:01:04 -- accel/accel.sh@20 -- # IFS=: 00:06:51.120 12:01:04 -- accel/accel.sh@20 -- # read -r var val 00:06:51.120 12:01:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:51.120 12:01:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.120 12:01:04 -- accel/accel.sh@20 -- # IFS=: 00:06:51.120 12:01:04 -- accel/accel.sh@20 -- # read -r var val 00:06:51.120 12:01:04 -- accel/accel.sh@21 -- # val=No 00:06:51.120 12:01:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.120 12:01:04 -- accel/accel.sh@20 -- # IFS=: 00:06:51.120 12:01:04 -- accel/accel.sh@20 -- # read -r var val 00:06:51.120 12:01:04 -- accel/accel.sh@21 -- # val= 00:06:51.120 12:01:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.120 12:01:04 -- accel/accel.sh@20 -- # IFS=: 00:06:51.120 12:01:04 -- accel/accel.sh@20 -- # read -r var val 00:06:51.120 12:01:04 -- accel/accel.sh@21 -- # val= 00:06:51.120 12:01:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.120 12:01:04 -- accel/accel.sh@20 -- # IFS=: 00:06:51.120 12:01:04 -- accel/accel.sh@20 -- # read -r var val 00:06:52.505 12:01:05 -- accel/accel.sh@21 -- # val= 00:06:52.505 12:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.505 12:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.505 12:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.505 12:01:05 -- accel/accel.sh@21 -- # val= 00:06:52.505 12:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.505 12:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.505 12:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.505 12:01:05 -- accel/accel.sh@21 -- # val= 00:06:52.505 12:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.505 12:01:05 -- accel/accel.sh@20 -- # 
IFS=: 00:06:52.505 12:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.505 12:01:05 -- accel/accel.sh@21 -- # val= 00:06:52.505 12:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.505 12:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.505 12:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.505 12:01:05 -- accel/accel.sh@21 -- # val= 00:06:52.505 12:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.505 12:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.505 12:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.505 12:01:05 -- accel/accel.sh@21 -- # val= 00:06:52.505 12:01:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.505 12:01:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.505 12:01:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.505 12:01:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:52.505 12:01:05 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:52.505 12:01:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.505 00:06:52.505 real 0m2.466s 00:06:52.505 user 0m2.279s 00:06:52.505 sys 0m0.194s 00:06:52.505 12:01:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.505 12:01:05 -- common/autotest_common.sh@10 -- # set +x 00:06:52.505 ************************************ 00:06:52.505 END TEST accel_comp 00:06:52.505 ************************************ 00:06:52.505 12:01:05 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:52.505 12:01:05 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:52.505 12:01:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.505 12:01:05 -- common/autotest_common.sh@10 -- # set +x 00:06:52.505 ************************************ 00:06:52.505 START TEST accel_decomp 00:06:52.505 ************************************ 00:06:52.505 12:01:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:52.505 12:01:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.505 12:01:05 -- accel/accel.sh@17 -- # local accel_module 00:06:52.505 12:01:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:52.505 12:01:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:52.505 12:01:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.505 12:01:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.505 12:01:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.505 12:01:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.505 12:01:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.505 12:01:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.505 12:01:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.505 12:01:05 -- accel/accel.sh@42 -- # jq -r . 00:06:52.505 [2024-06-11 12:01:05.175105] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:52.505 [2024-06-11 12:01:05.175161] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1282663 ] 00:06:52.505 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.505 [2024-06-11 12:01:05.232795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.505 [2024-06-11 12:01:05.260389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.448 12:01:06 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:53.448 00:06:53.448 SPDK Configuration: 00:06:53.448 Core mask: 0x1 00:06:53.448 00:06:53.448 Accel Perf Configuration: 00:06:53.448 Workload Type: decompress 00:06:53.448 Transfer size: 4096 bytes 00:06:53.448 Vector count 1 00:06:53.448 Module: software 00:06:53.448 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:53.448 Queue depth: 32 00:06:53.448 Allocate depth: 32 00:06:53.448 # threads/core: 1 00:06:53.448 Run time: 1 seconds 00:06:53.448 Verify: Yes 00:06:53.448 00:06:53.448 Running for 1 seconds... 00:06:53.448 00:06:53.448 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:53.448 ------------------------------------------------------------------------------------ 00:06:53.448 0,0 63296/s 116 MiB/s 0 0 00:06:53.448 ==================================================================================== 00:06:53.448 Total 63296/s 247 MiB/s 0 0' 00:06:53.448 12:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.448 12:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.448 12:01:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:53.448 12:01:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:53.448 12:01:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.448 12:01:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.448 12:01:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.448 12:01:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.448 12:01:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.448 12:01:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.448 12:01:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.448 12:01:06 -- accel/accel.sh@42 -- # jq -r . 00:06:53.448 [2024-06-11 12:01:06.401165] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:53.448 [2024-06-11 12:01:06.401240] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1282995 ] 00:06:53.448 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.448 [2024-06-11 12:01:06.461727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.709 [2024-06-11 12:01:06.489309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.709 12:01:06 -- accel/accel.sh@21 -- # val= 00:06:53.709 12:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.709 12:01:06 -- accel/accel.sh@21 -- # val= 00:06:53.709 12:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.709 12:01:06 -- accel/accel.sh@21 -- # val= 00:06:53.709 12:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.709 12:01:06 -- accel/accel.sh@21 -- # val=0x1 00:06:53.709 12:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.709 12:01:06 -- accel/accel.sh@21 -- # val= 00:06:53.709 12:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.709 12:01:06 -- accel/accel.sh@21 -- # val= 00:06:53.709 12:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.709 12:01:06 -- accel/accel.sh@21 -- # val=decompress 00:06:53.709 12:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.709 12:01:06 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.709 12:01:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:53.709 12:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.709 12:01:06 -- accel/accel.sh@21 -- # val= 00:06:53.709 12:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.709 12:01:06 -- accel/accel.sh@21 -- # val=software 00:06:53.709 12:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.709 12:01:06 -- accel/accel.sh@23 -- # accel_module=software 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.709 12:01:06 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:53.709 12:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.709 12:01:06 -- accel/accel.sh@21 -- # val=32 00:06:53.709 12:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.709 12:01:06 
-- accel/accel.sh@20 -- # read -r var val 00:06:53.709 12:01:06 -- accel/accel.sh@21 -- # val=32 00:06:53.709 12:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.709 12:01:06 -- accel/accel.sh@21 -- # val=1 00:06:53.709 12:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.709 12:01:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:53.709 12:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.709 12:01:06 -- accel/accel.sh@21 -- # val=Yes 00:06:53.709 12:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.709 12:01:06 -- accel/accel.sh@21 -- # val= 00:06:53.709 12:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:53.709 12:01:06 -- accel/accel.sh@21 -- # val= 00:06:53.709 12:01:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # IFS=: 00:06:53.709 12:01:06 -- accel/accel.sh@20 -- # read -r var val 00:06:54.653 12:01:07 -- accel/accel.sh@21 -- # val= 00:06:54.653 12:01:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.653 12:01:07 -- accel/accel.sh@20 -- # IFS=: 00:06:54.653 12:01:07 -- accel/accel.sh@20 -- # read -r var val 00:06:54.653 12:01:07 -- accel/accel.sh@21 -- # val= 00:06:54.653 12:01:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.653 12:01:07 -- accel/accel.sh@20 -- # IFS=: 00:06:54.653 12:01:07 -- accel/accel.sh@20 -- # read -r var val 00:06:54.653 12:01:07 -- accel/accel.sh@21 -- # val= 00:06:54.653 12:01:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.653 12:01:07 -- accel/accel.sh@20 -- # IFS=: 00:06:54.653 12:01:07 -- accel/accel.sh@20 -- # read -r var val 00:06:54.653 12:01:07 -- accel/accel.sh@21 -- # val= 00:06:54.653 12:01:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.653 12:01:07 -- accel/accel.sh@20 -- # IFS=: 00:06:54.653 12:01:07 -- accel/accel.sh@20 -- # read -r var val 00:06:54.653 12:01:07 -- accel/accel.sh@21 -- # val= 00:06:54.653 12:01:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.653 12:01:07 -- accel/accel.sh@20 -- # IFS=: 00:06:54.653 12:01:07 -- accel/accel.sh@20 -- # read -r var val 00:06:54.653 12:01:07 -- accel/accel.sh@21 -- # val= 00:06:54.654 12:01:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.654 12:01:07 -- accel/accel.sh@20 -- # IFS=: 00:06:54.654 12:01:07 -- accel/accel.sh@20 -- # read -r var val 00:06:54.654 12:01:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:54.654 12:01:07 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:54.654 12:01:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.654 00:06:54.654 real 0m2.446s 00:06:54.654 user 0m2.262s 00:06:54.654 sys 0m0.192s 00:06:54.654 12:01:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.654 12:01:07 -- common/autotest_common.sh@10 -- # set +x 00:06:54.654 ************************************ 00:06:54.654 END TEST accel_decomp 00:06:54.654 ************************************ 00:06:54.654 12:01:07 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:54.654 12:01:07 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:54.654 12:01:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.654 12:01:07 -- common/autotest_common.sh@10 -- # set +x 00:06:54.654 ************************************ 00:06:54.654 START TEST accel_decmop_full 00:06:54.654 ************************************ 00:06:54.654 12:01:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:54.654 12:01:07 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.654 12:01:07 -- accel/accel.sh@17 -- # local accel_module 00:06:54.654 12:01:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:54.654 12:01:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:54.654 12:01:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.654 12:01:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.654 12:01:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.654 12:01:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.654 12:01:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.654 12:01:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.654 12:01:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.654 12:01:07 -- accel/accel.sh@42 -- # jq -r . 00:06:54.654 [2024-06-11 12:01:07.679299] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:54.654 [2024-06-11 12:01:07.679379] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1283136 ] 00:06:54.915 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.915 [2024-06-11 12:01:07.741955] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.915 [2024-06-11 12:01:07.771830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.301 12:01:08 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:56.301 00:06:56.301 SPDK Configuration: 00:06:56.302 Core mask: 0x1 00:06:56.302 00:06:56.302 Accel Perf Configuration: 00:06:56.302 Workload Type: decompress 00:06:56.302 Transfer size: 111250 bytes 00:06:56.302 Vector count 1 00:06:56.302 Module: software 00:06:56.302 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:56.302 Queue depth: 32 00:06:56.302 Allocate depth: 32 00:06:56.302 # threads/core: 1 00:06:56.302 Run time: 1 seconds 00:06:56.302 Verify: Yes 00:06:56.302 00:06:56.302 Running for 1 seconds... 
00:06:56.302 00:06:56.302 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:56.302 ------------------------------------------------------------------------------------ 00:06:56.302 0,0 4096/s 169 MiB/s 0 0 00:06:56.302 ==================================================================================== 00:06:56.302 Total 4096/s 434 MiB/s 0 0' 00:06:56.302 12:01:08 -- accel/accel.sh@20 -- # IFS=: 00:06:56.302 12:01:08 -- accel/accel.sh@20 -- # read -r var val 00:06:56.302 12:01:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:56.302 12:01:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:56.302 12:01:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.302 12:01:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.302 12:01:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.302 12:01:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.302 12:01:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.302 12:01:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.302 12:01:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.302 12:01:08 -- accel/accel.sh@42 -- # jq -r . 00:06:56.302 [2024-06-11 12:01:08.929190] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:56.302 [2024-06-11 12:01:08.929289] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1283371 ] 00:06:56.302 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.302 [2024-06-11 12:01:08.995120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.302 [2024-06-11 12:01:09.023350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.302 12:01:09 -- accel/accel.sh@21 -- # val= 00:06:56.302 12:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:56.302 12:01:09 -- accel/accel.sh@21 -- # val= 00:06:56.302 12:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:56.302 12:01:09 -- accel/accel.sh@21 -- # val= 00:06:56.302 12:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:56.302 12:01:09 -- accel/accel.sh@21 -- # val=0x1 00:06:56.302 12:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:56.302 12:01:09 -- accel/accel.sh@21 -- # val= 00:06:56.302 12:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:56.302 12:01:09 -- accel/accel.sh@21 -- # val= 00:06:56.302 12:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:56.302 12:01:09 -- accel/accel.sh@21 -- # val=decompress 00:06:56.302 12:01:09 -- accel/accel.sh@22 -- # case "$var" 
in 00:06:56.302 12:01:09 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:56.302 12:01:09 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:56.302 12:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:56.302 12:01:09 -- accel/accel.sh@21 -- # val= 00:06:56.302 12:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:56.302 12:01:09 -- accel/accel.sh@21 -- # val=software 00:06:56.302 12:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.302 12:01:09 -- accel/accel.sh@23 -- # accel_module=software 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:56.302 12:01:09 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:56.302 12:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:56.302 12:01:09 -- accel/accel.sh@21 -- # val=32 00:06:56.302 12:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:56.302 12:01:09 -- accel/accel.sh@21 -- # val=32 00:06:56.302 12:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:56.302 12:01:09 -- accel/accel.sh@21 -- # val=1 00:06:56.302 12:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:56.302 12:01:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:56.302 12:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:56.302 12:01:09 -- accel/accel.sh@21 -- # val=Yes 00:06:56.302 12:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:56.302 12:01:09 -- accel/accel.sh@21 -- # val= 00:06:56.302 12:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:56.302 12:01:09 -- accel/accel.sh@21 -- # val= 00:06:56.302 12:01:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # IFS=: 00:06:56.302 12:01:09 -- accel/accel.sh@20 -- # read -r var val 00:06:57.324 12:01:10 -- accel/accel.sh@21 -- # val= 00:06:57.324 12:01:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.324 12:01:10 -- accel/accel.sh@20 -- # IFS=: 00:06:57.324 12:01:10 -- accel/accel.sh@20 -- # read -r var val 00:06:57.324 12:01:10 -- accel/accel.sh@21 -- # val= 00:06:57.324 12:01:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.324 12:01:10 -- accel/accel.sh@20 -- # IFS=: 00:06:57.324 12:01:10 -- accel/accel.sh@20 -- # read -r var val 00:06:57.324 12:01:10 -- accel/accel.sh@21 -- # val= 00:06:57.324 12:01:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.324 12:01:10 -- 
accel/accel.sh@20 -- # IFS=: 00:06:57.324 12:01:10 -- accel/accel.sh@20 -- # read -r var val 00:06:57.324 12:01:10 -- accel/accel.sh@21 -- # val= 00:06:57.324 12:01:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.324 12:01:10 -- accel/accel.sh@20 -- # IFS=: 00:06:57.324 12:01:10 -- accel/accel.sh@20 -- # read -r var val 00:06:57.324 12:01:10 -- accel/accel.sh@21 -- # val= 00:06:57.324 12:01:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.324 12:01:10 -- accel/accel.sh@20 -- # IFS=: 00:06:57.324 12:01:10 -- accel/accel.sh@20 -- # read -r var val 00:06:57.324 12:01:10 -- accel/accel.sh@21 -- # val= 00:06:57.324 12:01:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.324 12:01:10 -- accel/accel.sh@20 -- # IFS=: 00:06:57.324 12:01:10 -- accel/accel.sh@20 -- # read -r var val 00:06:57.324 12:01:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:57.324 12:01:10 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:57.324 12:01:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.324 00:06:57.324 real 0m2.505s 00:06:57.324 user 0m2.298s 00:06:57.324 sys 0m0.214s 00:06:57.324 12:01:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.324 12:01:10 -- common/autotest_common.sh@10 -- # set +x 00:06:57.324 ************************************ 00:06:57.324 END TEST accel_decmop_full 00:06:57.324 ************************************ 00:06:57.324 12:01:10 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:57.324 12:01:10 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:57.324 12:01:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:57.324 12:01:10 -- common/autotest_common.sh@10 -- # set +x 00:06:57.324 ************************************ 00:06:57.324 START TEST accel_decomp_mcore 00:06:57.324 ************************************ 00:06:57.324 12:01:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:57.324 12:01:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:57.324 12:01:10 -- accel/accel.sh@17 -- # local accel_module 00:06:57.324 12:01:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:57.324 12:01:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:57.324 12:01:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.324 12:01:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.324 12:01:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.324 12:01:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.324 12:01:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.324 12:01:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.324 12:01:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.324 12:01:10 -- accel/accel.sh@42 -- # jq -r . 00:06:57.324 [2024-06-11 12:01:10.223478] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
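For reference, the accel_perf command line echoed by accel.sh@12 just above is what drives this accel_decomp_mcore case. A minimal sketch for rerunning it by hand against an already built SPDK tree follows; the paths are copied from the log, and dropping the -c /dev/fd/62 JSON-config plumbing is an assumption (this run only selects the software module, which needs no extra configuration):

  # Flag meanings as reflected in the config dump printed by this test:
  #   -t 1           -> "Run time: 1 seconds"
  #   -w decompress  -> "Workload Type: decompress"
  #   -l .../bib     -> "File Name: .../test/accel/bib" (compressed input)
  #   -y             -> "Verify: Yes"
  #   -m 0xf         -> "Core mask: 0xf" (the four reactors started below)
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR"/build/examples/accel_perf -t 1 -w decompress \
      -l "$SPDK_DIR"/test/accel/bib -y -m 0xf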
00:06:57.324 [2024-06-11 12:01:10.223552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1283723 ] 00:06:57.324 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.324 [2024-06-11 12:01:10.285578] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:57.324 [2024-06-11 12:01:10.316630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.324 [2024-06-11 12:01:10.316777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.324 [2024-06-11 12:01:10.316919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.324 [2024-06-11 12:01:10.316919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.710 12:01:11 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:58.710 00:06:58.710 SPDK Configuration: 00:06:58.710 Core mask: 0xf 00:06:58.710 00:06:58.710 Accel Perf Configuration: 00:06:58.710 Workload Type: decompress 00:06:58.710 Transfer size: 4096 bytes 00:06:58.710 Vector count 1 00:06:58.710 Module: software 00:06:58.710 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.710 Queue depth: 32 00:06:58.710 Allocate depth: 32 00:06:58.710 # threads/core: 1 00:06:58.710 Run time: 1 seconds 00:06:58.710 Verify: Yes 00:06:58.710 00:06:58.710 Running for 1 seconds... 00:06:58.710 00:06:58.710 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:58.710 ------------------------------------------------------------------------------------ 00:06:58.710 0,0 58464/s 107 MiB/s 0 0 00:06:58.710 3,0 58432/s 107 MiB/s 0 0 00:06:58.710 2,0 86176/s 158 MiB/s 0 0 00:06:58.710 1,0 58624/s 108 MiB/s 0 0 00:06:58.710 ==================================================================================== 00:06:58.710 Total 261696/s 1022 MiB/s 0 0' 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.710 12:01:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:58.710 12:01:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:58.710 12:01:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.710 12:01:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.710 12:01:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.710 12:01:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.710 12:01:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.710 12:01:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.710 12:01:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.710 12:01:11 -- accel/accel.sh@42 -- # jq -r . 00:06:58.710 [2024-06-11 12:01:11.462232] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
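As a quick sanity check, the Bandwidth column in the accel_decomp_mcore summary above is consistent with the Transfers column times the configured transfer size (4096 bytes), taking MiB as 2^20 bytes; the snippet below only recomputes numbers already present in the log, it is not an additional measurement:

  # 261696 transfers/s * 4096 bytes ~= 1022 MiB/s, matching the
  # "Total 261696/s 1022 MiB/s" line in the summary above.
  echo $(( 261696 * 4096 / 1024 / 1024 ))    # prints 1022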
00:06:58.710 [2024-06-11 12:01:11.462311] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1284060 ] 00:06:58.710 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.710 [2024-06-11 12:01:11.524629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:58.710 [2024-06-11 12:01:11.554746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.710 [2024-06-11 12:01:11.554860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.710 [2024-06-11 12:01:11.555014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.710 [2024-06-11 12:01:11.555015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.710 12:01:11 -- accel/accel.sh@21 -- # val= 00:06:58.710 12:01:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.710 12:01:11 -- accel/accel.sh@21 -- # val= 00:06:58.710 12:01:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.710 12:01:11 -- accel/accel.sh@21 -- # val= 00:06:58.710 12:01:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.710 12:01:11 -- accel/accel.sh@21 -- # val=0xf 00:06:58.710 12:01:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.710 12:01:11 -- accel/accel.sh@21 -- # val= 00:06:58.710 12:01:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.710 12:01:11 -- accel/accel.sh@21 -- # val= 00:06:58.710 12:01:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.710 12:01:11 -- accel/accel.sh@21 -- # val=decompress 00:06:58.710 12:01:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.710 12:01:11 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.710 12:01:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.710 12:01:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.710 12:01:11 -- accel/accel.sh@21 -- # val= 00:06:58.710 12:01:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.710 12:01:11 -- accel/accel.sh@21 -- # val=software 00:06:58.710 12:01:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.710 12:01:11 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.710 12:01:11 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.710 12:01:11 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.710 12:01:11 -- accel/accel.sh@21 -- # val=32 00:06:58.710 12:01:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.710 12:01:11 -- accel/accel.sh@21 -- # val=32 00:06:58.710 12:01:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.710 12:01:11 -- accel/accel.sh@21 -- # val=1 00:06:58.710 12:01:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.710 12:01:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:58.710 12:01:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.710 12:01:11 -- accel/accel.sh@21 -- # val=Yes 00:06:58.710 12:01:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.710 12:01:11 -- accel/accel.sh@21 -- # val= 00:06:58.710 12:01:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.710 12:01:11 -- accel/accel.sh@21 -- # val= 00:06:58.710 12:01:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.710 12:01:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.650 12:01:12 -- accel/accel.sh@21 -- # val= 00:06:59.650 12:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.651 12:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:59.651 12:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:59.651 12:01:12 -- accel/accel.sh@21 -- # val= 00:06:59.651 12:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.651 12:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:59.651 12:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:59.651 12:01:12 -- accel/accel.sh@21 -- # val= 00:06:59.651 12:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.651 12:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:59.651 12:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:59.651 12:01:12 -- accel/accel.sh@21 -- # val= 00:06:59.651 12:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.651 12:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:59.651 12:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:59.651 12:01:12 -- accel/accel.sh@21 -- # val= 00:06:59.651 12:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.651 12:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:59.651 12:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:59.651 12:01:12 -- accel/accel.sh@21 -- # val= 00:06:59.651 12:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.651 12:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:59.651 12:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:59.651 12:01:12 -- accel/accel.sh@21 -- # val= 00:06:59.651 12:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.651 12:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:59.651 12:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:59.651 12:01:12 -- accel/accel.sh@21 -- # val= 00:06:59.651 12:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.651 
12:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:59.651 12:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:59.651 12:01:12 -- accel/accel.sh@21 -- # val= 00:06:59.651 12:01:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.651 12:01:12 -- accel/accel.sh@20 -- # IFS=: 00:06:59.651 12:01:12 -- accel/accel.sh@20 -- # read -r var val 00:06:59.651 12:01:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.651 12:01:12 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:59.651 12:01:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.651 00:06:59.651 real 0m2.485s 00:06:59.651 user 0m8.746s 00:06:59.651 sys 0m0.214s 00:06:59.651 12:01:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.651 12:01:12 -- common/autotest_common.sh@10 -- # set +x 00:06:59.651 ************************************ 00:06:59.651 END TEST accel_decomp_mcore 00:06:59.651 ************************************ 00:06:59.911 12:01:12 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:59.911 12:01:12 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:59.911 12:01:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:59.911 12:01:12 -- common/autotest_common.sh@10 -- # set +x 00:06:59.911 ************************************ 00:06:59.911 START TEST accel_decomp_full_mcore 00:06:59.911 ************************************ 00:06:59.911 12:01:12 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:59.911 12:01:12 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.911 12:01:12 -- accel/accel.sh@17 -- # local accel_module 00:06:59.911 12:01:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:59.911 12:01:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:59.911 12:01:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.911 12:01:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.911 12:01:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.911 12:01:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.911 12:01:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.911 12:01:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.911 12:01:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.911 12:01:12 -- accel/accel.sh@42 -- # jq -r . 00:06:59.911 [2024-06-11 12:01:12.751264] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
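accel_decomp_full_mcore repeats the multi-core decompress run with -o 0 appended, the only change relative to the accel_decomp_mcore command line, and the config dump below reports a transfer size of 111250 bytes instead of 4096. A standalone sketch under the same assumptions as before (built SPDK tree, software module only, the -c /dev/fd/62 config omitted):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR"/build/examples/accel_perf -t 1 -w decompress \
      -l "$SPDK_DIR"/test/accel/bib -y -o 0 -m 0xf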
00:06:59.911 [2024-06-11 12:01:12.751354] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1284195 ] 00:06:59.911 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.911 [2024-06-11 12:01:12.814373] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:59.911 [2024-06-11 12:01:12.846919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.911 [2024-06-11 12:01:12.847040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.911 [2024-06-11 12:01:12.847141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:59.911 [2024-06-11 12:01:12.847142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.294 12:01:13 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:01.294 00:07:01.294 SPDK Configuration: 00:07:01.294 Core mask: 0xf 00:07:01.294 00:07:01.294 Accel Perf Configuration: 00:07:01.294 Workload Type: decompress 00:07:01.294 Transfer size: 111250 bytes 00:07:01.294 Vector count 1 00:07:01.294 Module: software 00:07:01.294 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:01.294 Queue depth: 32 00:07:01.294 Allocate depth: 32 00:07:01.294 # threads/core: 1 00:07:01.294 Run time: 1 seconds 00:07:01.294 Verify: Yes 00:07:01.294 00:07:01.294 Running for 1 seconds... 00:07:01.294 00:07:01.294 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:01.294 ------------------------------------------------------------------------------------ 00:07:01.294 0,0 4096/s 169 MiB/s 0 0 00:07:01.294 3,0 4096/s 169 MiB/s 0 0 00:07:01.294 2,0 5952/s 245 MiB/s 0 0 00:07:01.294 1,0 4096/s 169 MiB/s 0 0 00:07:01.294 ==================================================================================== 00:07:01.294 Total 18240/s 1935 MiB/s 0 0' 00:07:01.294 12:01:13 -- accel/accel.sh@20 -- # IFS=: 00:07:01.294 12:01:13 -- accel/accel.sh@20 -- # read -r var val 00:07:01.294 12:01:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:01.294 12:01:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:01.294 12:01:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.294 12:01:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.294 12:01:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.294 12:01:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.294 12:01:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.294 12:01:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.294 12:01:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.294 12:01:13 -- accel/accel.sh@42 -- # jq -r . 00:07:01.294 [2024-06-11 12:01:14.007940] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:01.294 [2024-06-11 12:01:14.008010] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1284445 ] 00:07:01.294 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.294 [2024-06-11 12:01:14.069351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.294 [2024-06-11 12:01:14.099486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.294 [2024-06-11 12:01:14.099604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.294 [2024-06-11 12:01:14.099759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.294 [2024-06-11 12:01:14.099760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.294 12:01:14 -- accel/accel.sh@21 -- # val= 00:07:01.294 12:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.294 12:01:14 -- accel/accel.sh@21 -- # val= 00:07:01.294 12:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.294 12:01:14 -- accel/accel.sh@21 -- # val= 00:07:01.294 12:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.294 12:01:14 -- accel/accel.sh@21 -- # val=0xf 00:07:01.294 12:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.294 12:01:14 -- accel/accel.sh@21 -- # val= 00:07:01.294 12:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.294 12:01:14 -- accel/accel.sh@21 -- # val= 00:07:01.294 12:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.294 12:01:14 -- accel/accel.sh@21 -- # val=decompress 00:07:01.294 12:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.294 12:01:14 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.294 12:01:14 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:01.294 12:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.294 12:01:14 -- accel/accel.sh@21 -- # val= 00:07:01.294 12:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.294 12:01:14 -- accel/accel.sh@21 -- # val=software 00:07:01.294 12:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.294 12:01:14 -- accel/accel.sh@23 -- # accel_module=software 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.294 12:01:14 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:01.294 12:01:14 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.294 12:01:14 -- accel/accel.sh@21 -- # val=32 00:07:01.294 12:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.294 12:01:14 -- accel/accel.sh@21 -- # val=32 00:07:01.294 12:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.294 12:01:14 -- accel/accel.sh@21 -- # val=1 00:07:01.294 12:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.294 12:01:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:01.294 12:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.294 12:01:14 -- accel/accel.sh@21 -- # val=Yes 00:07:01.294 12:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.294 12:01:14 -- accel/accel.sh@21 -- # val= 00:07:01.294 12:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.294 12:01:14 -- accel/accel.sh@21 -- # val= 00:07:01.294 12:01:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.294 12:01:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.235 12:01:15 -- accel/accel.sh@21 -- # val= 00:07:02.235 12:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.235 12:01:15 -- accel/accel.sh@20 -- # IFS=: 00:07:02.235 12:01:15 -- accel/accel.sh@20 -- # read -r var val 00:07:02.235 12:01:15 -- accel/accel.sh@21 -- # val= 00:07:02.235 12:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.235 12:01:15 -- accel/accel.sh@20 -- # IFS=: 00:07:02.235 12:01:15 -- accel/accel.sh@20 -- # read -r var val 00:07:02.235 12:01:15 -- accel/accel.sh@21 -- # val= 00:07:02.235 12:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.235 12:01:15 -- accel/accel.sh@20 -- # IFS=: 00:07:02.235 12:01:15 -- accel/accel.sh@20 -- # read -r var val 00:07:02.235 12:01:15 -- accel/accel.sh@21 -- # val= 00:07:02.235 12:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.235 12:01:15 -- accel/accel.sh@20 -- # IFS=: 00:07:02.235 12:01:15 -- accel/accel.sh@20 -- # read -r var val 00:07:02.235 12:01:15 -- accel/accel.sh@21 -- # val= 00:07:02.235 12:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.235 12:01:15 -- accel/accel.sh@20 -- # IFS=: 00:07:02.235 12:01:15 -- accel/accel.sh@20 -- # read -r var val 00:07:02.235 12:01:15 -- accel/accel.sh@21 -- # val= 00:07:02.235 12:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.235 12:01:15 -- accel/accel.sh@20 -- # IFS=: 00:07:02.235 12:01:15 -- accel/accel.sh@20 -- # read -r var val 00:07:02.235 12:01:15 -- accel/accel.sh@21 -- # val= 00:07:02.235 12:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.235 12:01:15 -- accel/accel.sh@20 -- # IFS=: 00:07:02.235 12:01:15 -- accel/accel.sh@20 -- # read -r var val 00:07:02.235 12:01:15 -- accel/accel.sh@21 -- # val= 00:07:02.235 12:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.235 
12:01:15 -- accel/accel.sh@20 -- # IFS=: 00:07:02.235 12:01:15 -- accel/accel.sh@20 -- # read -r var val 00:07:02.235 12:01:15 -- accel/accel.sh@21 -- # val= 00:07:02.235 12:01:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.235 12:01:15 -- accel/accel.sh@20 -- # IFS=: 00:07:02.235 12:01:15 -- accel/accel.sh@20 -- # read -r var val 00:07:02.235 12:01:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:02.235 12:01:15 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:02.235 12:01:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.235 00:07:02.235 real 0m2.513s 00:07:02.235 user 0m8.835s 00:07:02.236 sys 0m0.222s 00:07:02.236 12:01:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.236 12:01:15 -- common/autotest_common.sh@10 -- # set +x 00:07:02.236 ************************************ 00:07:02.236 END TEST accel_decomp_full_mcore 00:07:02.236 ************************************ 00:07:02.236 12:01:15 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:02.236 12:01:15 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:02.236 12:01:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:02.236 12:01:15 -- common/autotest_common.sh@10 -- # set +x 00:07:02.496 ************************************ 00:07:02.496 START TEST accel_decomp_mthread 00:07:02.496 ************************************ 00:07:02.496 12:01:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:02.496 12:01:15 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.496 12:01:15 -- accel/accel.sh@17 -- # local accel_module 00:07:02.496 12:01:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:02.496 12:01:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:02.496 12:01:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.496 12:01:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.496 12:01:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.497 12:01:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.497 12:01:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.497 12:01:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.497 12:01:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.497 12:01:15 -- accel/accel.sh@42 -- # jq -r . 00:07:02.497 [2024-06-11 12:01:15.300184] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:02.497 [2024-06-11 12:01:15.300261] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1284797 ] 00:07:02.497 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.497 [2024-06-11 12:01:15.362096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.497 [2024-06-11 12:01:15.392293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.879 12:01:16 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:03.879 00:07:03.879 SPDK Configuration: 00:07:03.879 Core mask: 0x1 00:07:03.879 00:07:03.879 Accel Perf Configuration: 00:07:03.879 Workload Type: decompress 00:07:03.879 Transfer size: 4096 bytes 00:07:03.879 Vector count 1 00:07:03.879 Module: software 00:07:03.879 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:03.879 Queue depth: 32 00:07:03.879 Allocate depth: 32 00:07:03.879 # threads/core: 2 00:07:03.879 Run time: 1 seconds 00:07:03.879 Verify: Yes 00:07:03.879 00:07:03.879 Running for 1 seconds... 00:07:03.879 00:07:03.879 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:03.879 ------------------------------------------------------------------------------------ 00:07:03.879 0,1 31968/s 58 MiB/s 0 0 00:07:03.879 0,0 31872/s 58 MiB/s 0 0 00:07:03.879 ==================================================================================== 00:07:03.879 Total 63840/s 249 MiB/s 0 0' 00:07:03.879 12:01:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.879 12:01:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.879 12:01:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:03.879 12:01:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:03.879 12:01:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.879 12:01:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.879 12:01:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.879 12:01:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.879 12:01:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.879 12:01:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.879 12:01:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.879 12:01:16 -- accel/accel.sh@42 -- # jq -r . 00:07:03.879 [2024-06-11 12:01:16.536875] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
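The accel_decomp_mthread case keeps the single-core 0x1 mask but adds -T 2, which shows up above as "# threads/core: 2" and as the two per-thread rows (0,0 and 0,1) in the summary table. A standalone sketch under the same assumptions as the earlier ones:

  # One core, two worker threads on it; the -c /dev/fd/62 config is again omitted.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR"/build/examples/accel_perf -t 1 -w decompress \
      -l "$SPDK_DIR"/test/accel/bib -y -T 2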
00:07:03.879 [2024-06-11 12:01:16.536952] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285125 ] 00:07:03.879 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.879 [2024-06-11 12:01:16.597817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.879 [2024-06-11 12:01:16.626457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.879 12:01:16 -- accel/accel.sh@21 -- # val= 00:07:03.879 12:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.879 12:01:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.879 12:01:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.879 12:01:16 -- accel/accel.sh@21 -- # val= 00:07:03.879 12:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.879 12:01:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.879 12:01:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.879 12:01:16 -- accel/accel.sh@21 -- # val= 00:07:03.879 12:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.879 12:01:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.879 12:01:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.879 12:01:16 -- accel/accel.sh@21 -- # val=0x1 00:07:03.879 12:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.879 12:01:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.879 12:01:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.879 12:01:16 -- accel/accel.sh@21 -- # val= 00:07:03.879 12:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.879 12:01:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.879 12:01:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.879 12:01:16 -- accel/accel.sh@21 -- # val= 00:07:03.879 12:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.879 12:01:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.879 12:01:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.879 12:01:16 -- accel/accel.sh@21 -- # val=decompress 00:07:03.880 12:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.880 12:01:16 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:03.880 12:01:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.880 12:01:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.880 12:01:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.880 12:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.880 12:01:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.880 12:01:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.880 12:01:16 -- accel/accel.sh@21 -- # val= 00:07:03.880 12:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.880 12:01:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.880 12:01:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.880 12:01:16 -- accel/accel.sh@21 -- # val=software 00:07:03.880 12:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.880 12:01:16 -- accel/accel.sh@23 -- # accel_module=software 00:07:03.880 12:01:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.880 12:01:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.880 12:01:16 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:03.880 12:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.880 12:01:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.880 12:01:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.880 12:01:16 -- accel/accel.sh@21 -- # val=32 00:07:03.880 12:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.880 12:01:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.880 12:01:16 
-- accel/accel.sh@20 -- # read -r var val 00:07:03.880 12:01:16 -- accel/accel.sh@21 -- # val=32 00:07:03.880 12:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.880 12:01:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.880 12:01:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.880 12:01:16 -- accel/accel.sh@21 -- # val=2 00:07:03.880 12:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.880 12:01:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.880 12:01:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.880 12:01:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:03.880 12:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.880 12:01:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.880 12:01:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.880 12:01:16 -- accel/accel.sh@21 -- # val=Yes 00:07:03.880 12:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.880 12:01:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.880 12:01:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.880 12:01:16 -- accel/accel.sh@21 -- # val= 00:07:03.880 12:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.880 12:01:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.880 12:01:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.880 12:01:16 -- accel/accel.sh@21 -- # val= 00:07:03.880 12:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.880 12:01:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.880 12:01:16 -- accel/accel.sh@20 -- # read -r var val 00:07:04.820 12:01:17 -- accel/accel.sh@21 -- # val= 00:07:04.820 12:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.820 12:01:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.820 12:01:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.820 12:01:17 -- accel/accel.sh@21 -- # val= 00:07:04.820 12:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.820 12:01:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.820 12:01:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.820 12:01:17 -- accel/accel.sh@21 -- # val= 00:07:04.820 12:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.820 12:01:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.820 12:01:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.820 12:01:17 -- accel/accel.sh@21 -- # val= 00:07:04.820 12:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.820 12:01:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.820 12:01:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.820 12:01:17 -- accel/accel.sh@21 -- # val= 00:07:04.820 12:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.821 12:01:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.821 12:01:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.821 12:01:17 -- accel/accel.sh@21 -- # val= 00:07:04.821 12:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.821 12:01:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.821 12:01:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.821 12:01:17 -- accel/accel.sh@21 -- # val= 00:07:04.821 12:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.821 12:01:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.821 12:01:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.821 12:01:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.821 12:01:17 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:04.821 12:01:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.821 00:07:04.821 real 0m2.476s 00:07:04.821 user 0m2.278s 00:07:04.821 sys 0m0.204s 00:07:04.821 12:01:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.821 12:01:17 -- common/autotest_common.sh@10 -- # set +x 
00:07:04.821 ************************************ 00:07:04.821 END TEST accel_decomp_mthread 00:07:04.821 ************************************ 00:07:04.821 12:01:17 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:04.821 12:01:17 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:04.821 12:01:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:04.821 12:01:17 -- common/autotest_common.sh@10 -- # set +x 00:07:04.821 ************************************ 00:07:04.821 START TEST accel_deomp_full_mthread 00:07:04.821 ************************************ 00:07:04.821 12:01:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:04.821 12:01:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.821 12:01:17 -- accel/accel.sh@17 -- # local accel_module 00:07:04.821 12:01:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:04.821 12:01:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:04.821 12:01:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.821 12:01:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.821 12:01:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.821 12:01:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.821 12:01:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.821 12:01:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.821 12:01:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.821 12:01:17 -- accel/accel.sh@42 -- # jq -r . 00:07:04.821 [2024-06-11 12:01:17.801701] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:04.821 [2024-06-11 12:01:17.801747] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285258 ] 00:07:04.821 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.821 [2024-06-11 12:01:17.852728] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.081 [2024-06-11 12:01:17.880418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.021 12:01:19 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:06.021 00:07:06.021 SPDK Configuration: 00:07:06.021 Core mask: 0x1 00:07:06.021 00:07:06.021 Accel Perf Configuration: 00:07:06.021 Workload Type: decompress 00:07:06.021 Transfer size: 111250 bytes 00:07:06.021 Vector count 1 00:07:06.021 Module: software 00:07:06.021 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:06.021 Queue depth: 32 00:07:06.021 Allocate depth: 32 00:07:06.021 # threads/core: 2 00:07:06.021 Run time: 1 seconds 00:07:06.021 Verify: Yes 00:07:06.021 00:07:06.021 Running for 1 seconds... 
00:07:06.021 00:07:06.021 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.021 ------------------------------------------------------------------------------------ 00:07:06.021 0,1 2080/s 85 MiB/s 0 0 00:07:06.021 0,0 2080/s 85 MiB/s 0 0 00:07:06.021 ==================================================================================== 00:07:06.021 Total 4160/s 441 MiB/s 0 0' 00:07:06.021 12:01:19 -- accel/accel.sh@20 -- # IFS=: 00:07:06.021 12:01:19 -- accel/accel.sh@20 -- # read -r var val 00:07:06.021 12:01:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:06.021 12:01:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:06.021 12:01:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.021 12:01:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.021 12:01:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.021 12:01:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.021 12:01:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.021 12:01:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.021 12:01:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.021 12:01:19 -- accel/accel.sh@42 -- # jq -r . 00:07:06.021 [2024-06-11 12:01:19.052324] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:06.021 [2024-06-11 12:01:19.052404] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285501 ] 00:07:06.281 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.281 [2024-06-11 12:01:19.114421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.281 [2024-06-11 12:01:19.141929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.281 12:01:19 -- accel/accel.sh@21 -- # val= 00:07:06.281 12:01:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.281 12:01:19 -- accel/accel.sh@20 -- # IFS=: 00:07:06.281 12:01:19 -- accel/accel.sh@20 -- # read -r var val 00:07:06.281 12:01:19 -- accel/accel.sh@21 -- # val= 00:07:06.281 12:01:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.281 12:01:19 -- accel/accel.sh@20 -- # IFS=: 00:07:06.281 12:01:19 -- accel/accel.sh@20 -- # read -r var val 00:07:06.281 12:01:19 -- accel/accel.sh@21 -- # val= 00:07:06.281 12:01:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.281 12:01:19 -- accel/accel.sh@20 -- # IFS=: 00:07:06.281 12:01:19 -- accel/accel.sh@20 -- # read -r var val 00:07:06.281 12:01:19 -- accel/accel.sh@21 -- # val=0x1 00:07:06.281 12:01:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.281 12:01:19 -- accel/accel.sh@20 -- # IFS=: 00:07:06.281 12:01:19 -- accel/accel.sh@20 -- # read -r var val 00:07:06.281 12:01:19 -- accel/accel.sh@21 -- # val= 00:07:06.281 12:01:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.281 12:01:19 -- accel/accel.sh@20 -- # IFS=: 00:07:06.281 12:01:19 -- accel/accel.sh@20 -- # read -r var val 00:07:06.281 12:01:19 -- accel/accel.sh@21 -- # val= 00:07:06.281 12:01:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.281 12:01:19 -- accel/accel.sh@20 -- # IFS=: 00:07:06.281 12:01:19 -- accel/accel.sh@20 -- # read -r var val 00:07:06.281 12:01:19 -- accel/accel.sh@21 -- # val=decompress 00:07:06.281 
12:01:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.281 12:01:19 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:06.281 12:01:19 -- accel/accel.sh@20 -- # IFS=: 00:07:06.281 12:01:19 -- accel/accel.sh@20 -- # read -r var val 00:07:06.281 12:01:19 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:06.281 12:01:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.281 12:01:19 -- accel/accel.sh@20 -- # IFS=: 00:07:06.281 12:01:19 -- accel/accel.sh@20 -- # read -r var val 00:07:06.281 12:01:19 -- accel/accel.sh@21 -- # val= 00:07:06.281 12:01:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.281 12:01:19 -- accel/accel.sh@20 -- # IFS=: 00:07:06.281 12:01:19 -- accel/accel.sh@20 -- # read -r var val 00:07:06.281 12:01:19 -- accel/accel.sh@21 -- # val=software 00:07:06.281 12:01:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.281 12:01:19 -- accel/accel.sh@23 -- # accel_module=software 00:07:06.282 12:01:19 -- accel/accel.sh@20 -- # IFS=: 00:07:06.282 12:01:19 -- accel/accel.sh@20 -- # read -r var val 00:07:06.282 12:01:19 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:06.282 12:01:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.282 12:01:19 -- accel/accel.sh@20 -- # IFS=: 00:07:06.282 12:01:19 -- accel/accel.sh@20 -- # read -r var val 00:07:06.282 12:01:19 -- accel/accel.sh@21 -- # val=32 00:07:06.282 12:01:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.282 12:01:19 -- accel/accel.sh@20 -- # IFS=: 00:07:06.282 12:01:19 -- accel/accel.sh@20 -- # read -r var val 00:07:06.282 12:01:19 -- accel/accel.sh@21 -- # val=32 00:07:06.282 12:01:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.282 12:01:19 -- accel/accel.sh@20 -- # IFS=: 00:07:06.282 12:01:19 -- accel/accel.sh@20 -- # read -r var val 00:07:06.282 12:01:19 -- accel/accel.sh@21 -- # val=2 00:07:06.282 12:01:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.282 12:01:19 -- accel/accel.sh@20 -- # IFS=: 00:07:06.282 12:01:19 -- accel/accel.sh@20 -- # read -r var val 00:07:06.282 12:01:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:06.282 12:01:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.282 12:01:19 -- accel/accel.sh@20 -- # IFS=: 00:07:06.282 12:01:19 -- accel/accel.sh@20 -- # read -r var val 00:07:06.282 12:01:19 -- accel/accel.sh@21 -- # val=Yes 00:07:06.282 12:01:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.282 12:01:19 -- accel/accel.sh@20 -- # IFS=: 00:07:06.282 12:01:19 -- accel/accel.sh@20 -- # read -r var val 00:07:06.282 12:01:19 -- accel/accel.sh@21 -- # val= 00:07:06.282 12:01:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.282 12:01:19 -- accel/accel.sh@20 -- # IFS=: 00:07:06.282 12:01:19 -- accel/accel.sh@20 -- # read -r var val 00:07:06.282 12:01:19 -- accel/accel.sh@21 -- # val= 00:07:06.282 12:01:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.282 12:01:19 -- accel/accel.sh@20 -- # IFS=: 00:07:06.282 12:01:19 -- accel/accel.sh@20 -- # read -r var val 00:07:07.663 12:01:20 -- accel/accel.sh@21 -- # val= 00:07:07.663 12:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.663 12:01:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.663 12:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.663 12:01:20 -- accel/accel.sh@21 -- # val= 00:07:07.663 12:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.663 12:01:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.663 12:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.663 12:01:20 -- accel/accel.sh@21 -- # val= 00:07:07.663 12:01:20 -- accel/accel.sh@22 -- # 
case "$var" in 00:07:07.663 12:01:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.663 12:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.663 12:01:20 -- accel/accel.sh@21 -- # val= 00:07:07.663 12:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.663 12:01:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.663 12:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.663 12:01:20 -- accel/accel.sh@21 -- # val= 00:07:07.663 12:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.663 12:01:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.663 12:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.663 12:01:20 -- accel/accel.sh@21 -- # val= 00:07:07.663 12:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.663 12:01:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.663 12:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.663 12:01:20 -- accel/accel.sh@21 -- # val= 00:07:07.663 12:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.663 12:01:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.663 12:01:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.663 12:01:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:07.663 12:01:20 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:07.663 12:01:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.663 00:07:07.663 real 0m2.504s 00:07:07.663 user 0m2.329s 00:07:07.663 sys 0m0.183s 00:07:07.663 12:01:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.663 12:01:20 -- common/autotest_common.sh@10 -- # set +x 00:07:07.663 ************************************ 00:07:07.663 END TEST accel_deomp_full_mthread 00:07:07.663 ************************************ 00:07:07.663 12:01:20 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:07.663 12:01:20 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:07.663 12:01:20 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:07.663 12:01:20 -- accel/accel.sh@129 -- # build_accel_config 00:07:07.663 12:01:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:07.663 12:01:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.663 12:01:20 -- common/autotest_common.sh@10 -- # set +x 00:07:07.663 12:01:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.663 12:01:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.663 12:01:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.663 12:01:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.663 12:01:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.663 12:01:20 -- accel/accel.sh@42 -- # jq -r . 00:07:07.663 ************************************ 00:07:07.663 START TEST accel_dif_functional_tests 00:07:07.663 ************************************ 00:07:07.663 12:01:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:07.663 [2024-06-11 12:01:20.359946] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:07.664 [2024-06-11 12:01:20.360001] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285856 ] 00:07:07.664 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.664 [2024-06-11 12:01:20.418876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:07.664 [2024-06-11 12:01:20.448342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.664 [2024-06-11 12:01:20.448487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.664 [2024-06-11 12:01:20.448489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.664 00:07:07.664 00:07:07.664 CUnit - A unit testing framework for C - Version 2.1-3 00:07:07.664 http://cunit.sourceforge.net/ 00:07:07.664 00:07:07.664 00:07:07.664 Suite: accel_dif 00:07:07.664 Test: verify: DIF generated, GUARD check ...passed 00:07:07.664 Test: verify: DIF generated, APPTAG check ...passed 00:07:07.664 Test: verify: DIF generated, REFTAG check ...passed 00:07:07.664 Test: verify: DIF not generated, GUARD check ...[2024-06-11 12:01:20.497298] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:07.664 [2024-06-11 12:01:20.497336] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:07.664 passed 00:07:07.664 Test: verify: DIF not generated, APPTAG check ...[2024-06-11 12:01:20.497367] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:07.664 [2024-06-11 12:01:20.497382] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:07.664 passed 00:07:07.664 Test: verify: DIF not generated, REFTAG check ...[2024-06-11 12:01:20.497398] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:07.664 [2024-06-11 12:01:20.497412] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:07.664 passed 00:07:07.664 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:07.664 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-11 12:01:20.497452] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:07.664 passed 00:07:07.664 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:07.664 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:07.664 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:07.664 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-11 12:01:20.497563] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:07.664 passed 00:07:07.664 Test: generate copy: DIF generated, GUARD check ...passed 00:07:07.664 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:07.664 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:07.664 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:07.664 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:07.664 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:07.664 Test: generate copy: iovecs-len validate ...[2024-06-11 12:01:20.497750] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:07.664 passed 00:07:07.664 Test: generate copy: buffer alignment validate ...passed 00:07:07.664 00:07:07.664 Run Summary: Type Total Ran Passed Failed Inactive 00:07:07.664 suites 1 1 n/a 0 0 00:07:07.664 tests 20 20 20 0 0 00:07:07.664 asserts 204 204 204 0 n/a 00:07:07.664 00:07:07.664 Elapsed time = 0.000 seconds 00:07:07.664 00:07:07.664 real 0m0.265s 00:07:07.664 user 0m0.390s 00:07:07.664 sys 0m0.123s 00:07:07.664 12:01:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.664 12:01:20 -- common/autotest_common.sh@10 -- # set +x 00:07:07.664 ************************************ 00:07:07.664 END TEST accel_dif_functional_tests 00:07:07.664 ************************************ 00:07:07.664 00:07:07.664 real 0m52.518s 00:07:07.664 user 1m1.056s 00:07:07.664 sys 0m5.620s 00:07:07.664 12:01:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.664 12:01:20 -- common/autotest_common.sh@10 -- # set +x 00:07:07.664 ************************************ 00:07:07.664 END TEST accel 00:07:07.664 ************************************ 00:07:07.664 12:01:20 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:07.664 12:01:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:07.664 12:01:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:07.664 12:01:20 -- common/autotest_common.sh@10 -- # set +x 00:07:07.664 ************************************ 00:07:07.664 START TEST accel_rpc 00:07:07.664 ************************************ 00:07:07.664 12:01:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:07.924 * Looking for test storage... 00:07:07.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:07.924 12:01:20 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:07.924 12:01:20 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1285924 00:07:07.924 12:01:20 -- accel/accel_rpc.sh@15 -- # waitforlisten 1285924 00:07:07.924 12:01:20 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:07.924 12:01:20 -- common/autotest_common.sh@819 -- # '[' -z 1285924 ']' 00:07:07.924 12:01:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.924 12:01:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:07.924 12:01:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.924 12:01:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:07.924 12:01:20 -- common/autotest_common.sh@10 -- # set +x 00:07:07.924 [2024-06-11 12:01:20.821721] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:07.924 [2024-06-11 12:01:20.821804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285924 ] 00:07:07.924 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.924 [2024-06-11 12:01:20.887452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.924 [2024-06-11 12:01:20.925552] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:07.924 [2024-06-11 12:01:20.925717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.864 12:01:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:08.864 12:01:21 -- common/autotest_common.sh@852 -- # return 0 00:07:08.864 12:01:21 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:08.864 12:01:21 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:08.864 12:01:21 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:08.864 12:01:21 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:08.864 12:01:21 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:08.864 12:01:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:08.864 12:01:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:08.864 12:01:21 -- common/autotest_common.sh@10 -- # set +x 00:07:08.864 ************************************ 00:07:08.864 START TEST accel_assign_opcode 00:07:08.864 ************************************ 00:07:08.864 12:01:21 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:08.864 12:01:21 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:08.864 12:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:08.864 12:01:21 -- common/autotest_common.sh@10 -- # set +x 00:07:08.864 [2024-06-11 12:01:21.583641] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:08.864 12:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:08.864 12:01:21 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:08.864 12:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:08.864 12:01:21 -- common/autotest_common.sh@10 -- # set +x 00:07:08.865 [2024-06-11 12:01:21.595670] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:08.865 12:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:08.865 12:01:21 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:08.865 12:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:08.865 12:01:21 -- common/autotest_common.sh@10 -- # set +x 00:07:08.865 12:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:08.865 12:01:21 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:08.865 12:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:08.865 12:01:21 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:08.865 12:01:21 -- accel/accel_rpc.sh@42 -- # grep software 00:07:08.865 12:01:21 -- common/autotest_common.sh@10 -- # set +x 00:07:08.865 12:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:08.865 software 00:07:08.865 00:07:08.865 real 0m0.195s 00:07:08.865 user 0m0.050s 00:07:08.865 sys 0m0.010s 00:07:08.865 12:01:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.865 12:01:21 -- common/autotest_common.sh@10 -- # set +x 
00:07:08.865 ************************************ 00:07:08.865 END TEST accel_assign_opcode 00:07:08.865 ************************************ 00:07:08.865 12:01:21 -- accel/accel_rpc.sh@55 -- # killprocess 1285924 00:07:08.865 12:01:21 -- common/autotest_common.sh@926 -- # '[' -z 1285924 ']' 00:07:08.865 12:01:21 -- common/autotest_common.sh@930 -- # kill -0 1285924 00:07:08.865 12:01:21 -- common/autotest_common.sh@931 -- # uname 00:07:08.865 12:01:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:08.865 12:01:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1285924 00:07:08.865 12:01:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:08.865 12:01:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:08.865 12:01:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1285924' 00:07:08.865 killing process with pid 1285924 00:07:08.865 12:01:21 -- common/autotest_common.sh@945 -- # kill 1285924 00:07:08.865 12:01:21 -- common/autotest_common.sh@950 -- # wait 1285924 00:07:09.124 00:07:09.124 real 0m1.388s 00:07:09.124 user 0m1.446s 00:07:09.124 sys 0m0.397s 00:07:09.124 12:01:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.124 12:01:22 -- common/autotest_common.sh@10 -- # set +x 00:07:09.124 ************************************ 00:07:09.124 END TEST accel_rpc 00:07:09.124 ************************************ 00:07:09.124 12:01:22 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:09.124 12:01:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:09.124 12:01:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.124 12:01:22 -- common/autotest_common.sh@10 -- # set +x 00:07:09.124 ************************************ 00:07:09.124 START TEST app_cmdline 00:07:09.124 ************************************ 00:07:09.124 12:01:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:09.384 * Looking for test storage... 00:07:09.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:09.384 12:01:22 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:09.384 12:01:22 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1286325 00:07:09.384 12:01:22 -- app/cmdline.sh@18 -- # waitforlisten 1286325 00:07:09.384 12:01:22 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:09.384 12:01:22 -- common/autotest_common.sh@819 -- # '[' -z 1286325 ']' 00:07:09.384 12:01:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.384 12:01:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:09.384 12:01:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.384 12:01:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:09.384 12:01:22 -- common/autotest_common.sh@10 -- # set +x 00:07:09.384 [2024-06-11 12:01:22.264058] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
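The accel_rpc flow traced above boils down to: start spdk_tgt with --wait-for-rpc, assign the copy opcode to a module, finish initialization, and read the assignment back. Outside the harness the same sequence can be driven directly with scripts/rpc.py; the RPC method names below are taken from the trace, while the relative paths and the fixed sleep are illustrative assumptions:

    # Run from the SPDK repo root; spdk_tgt path as used elsewhere in this job.
    ./build/bin/spdk_tgt --wait-for-rpc &
    sleep 2                                    # crude stand-in for the harness's waitforlisten
    ./scripts/rpc.py accel_assign_opc -o copy -m software
    ./scripts/rpc.py framework_start_init      # subsystems only come up after this call
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy    # expected output: software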
00:07:09.384 [2024-06-11 12:01:22.264156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1286325 ] 00:07:09.384 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.384 [2024-06-11 12:01:22.332173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.384 [2024-06-11 12:01:22.368722] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:09.384 [2024-06-11 12:01:22.368873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.321 12:01:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:10.321 12:01:23 -- common/autotest_common.sh@852 -- # return 0 00:07:10.321 12:01:23 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:10.321 { 00:07:10.321 "version": "SPDK v24.01.1-pre git sha1 130b9406a", 00:07:10.321 "fields": { 00:07:10.321 "major": 24, 00:07:10.321 "minor": 1, 00:07:10.321 "patch": 1, 00:07:10.321 "suffix": "-pre", 00:07:10.321 "commit": "130b9406a" 00:07:10.321 } 00:07:10.321 } 00:07:10.321 12:01:23 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:10.321 12:01:23 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:10.321 12:01:23 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:10.321 12:01:23 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:10.321 12:01:23 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:10.321 12:01:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:10.321 12:01:23 -- common/autotest_common.sh@10 -- # set +x 00:07:10.321 12:01:23 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:10.321 12:01:23 -- app/cmdline.sh@26 -- # sort 00:07:10.321 12:01:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:10.321 12:01:23 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:10.321 12:01:23 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:10.321 12:01:23 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:10.321 12:01:23 -- common/autotest_common.sh@640 -- # local es=0 00:07:10.321 12:01:23 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:10.321 12:01:23 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:10.321 12:01:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:10.321 12:01:23 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:10.321 12:01:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:10.321 12:01:23 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:10.321 12:01:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:10.321 12:01:23 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:10.321 12:01:23 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:10.321 12:01:23 -- 
common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:10.321 request: 00:07:10.321 { 00:07:10.321 "method": "env_dpdk_get_mem_stats", 00:07:10.321 "req_id": 1 00:07:10.321 } 00:07:10.321 Got JSON-RPC error response 00:07:10.321 response: 00:07:10.321 { 00:07:10.321 "code": -32601, 00:07:10.321 "message": "Method not found" 00:07:10.321 } 00:07:10.321 12:01:23 -- common/autotest_common.sh@643 -- # es=1 00:07:10.321 12:01:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:10.321 12:01:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:10.321 12:01:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:10.321 12:01:23 -- app/cmdline.sh@1 -- # killprocess 1286325 00:07:10.321 12:01:23 -- common/autotest_common.sh@926 -- # '[' -z 1286325 ']' 00:07:10.321 12:01:23 -- common/autotest_common.sh@930 -- # kill -0 1286325 00:07:10.321 12:01:23 -- common/autotest_common.sh@931 -- # uname 00:07:10.580 12:01:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:10.580 12:01:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1286325 00:07:10.581 12:01:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:10.581 12:01:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:10.581 12:01:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1286325' 00:07:10.581 killing process with pid 1286325 00:07:10.581 12:01:23 -- common/autotest_common.sh@945 -- # kill 1286325 00:07:10.581 12:01:23 -- common/autotest_common.sh@950 -- # wait 1286325 00:07:10.581 00:07:10.581 real 0m1.494s 00:07:10.581 user 0m1.770s 00:07:10.581 sys 0m0.400s 00:07:10.581 12:01:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.581 12:01:23 -- common/autotest_common.sh@10 -- # set +x 00:07:10.581 ************************************ 00:07:10.581 END TEST app_cmdline 00:07:10.581 ************************************ 00:07:10.840 12:01:23 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:10.840 12:01:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:10.840 12:01:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:10.840 12:01:23 -- common/autotest_common.sh@10 -- # set +x 00:07:10.840 ************************************ 00:07:10.840 START TEST version 00:07:10.840 ************************************ 00:07:10.840 12:01:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:10.840 * Looking for test storage... 
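The app_cmdline run above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable; the env_dpdk_get_mem_stats call is therefore expected to fail with JSON-RPC error -32601 ("Method not found"), which is what the NOT wrapper asserts. A stand-alone reproduction with the same flags (paths assumed relative to the SPDK repo root):

    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    sleep 2
    ./scripts/rpc.py spdk_get_version                         # allowed: prints the version JSON shown above
    ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort     # allowed: lists exactly the two permitted methods
    ./scripts/rpc.py env_dpdk_get_mem_stats                   # rejected with code -32601, as in the trace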
00:07:10.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:10.840 12:01:23 -- app/version.sh@17 -- # get_header_version major 00:07:10.840 12:01:23 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:10.840 12:01:23 -- app/version.sh@14 -- # cut -f2 00:07:10.840 12:01:23 -- app/version.sh@14 -- # tr -d '"' 00:07:10.840 12:01:23 -- app/version.sh@17 -- # major=24 00:07:10.840 12:01:23 -- app/version.sh@18 -- # get_header_version minor 00:07:10.840 12:01:23 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:10.840 12:01:23 -- app/version.sh@14 -- # cut -f2 00:07:10.840 12:01:23 -- app/version.sh@14 -- # tr -d '"' 00:07:10.840 12:01:23 -- app/version.sh@18 -- # minor=1 00:07:10.840 12:01:23 -- app/version.sh@19 -- # get_header_version patch 00:07:10.840 12:01:23 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:10.840 12:01:23 -- app/version.sh@14 -- # cut -f2 00:07:10.840 12:01:23 -- app/version.sh@14 -- # tr -d '"' 00:07:10.840 12:01:23 -- app/version.sh@19 -- # patch=1 00:07:10.840 12:01:23 -- app/version.sh@20 -- # get_header_version suffix 00:07:10.840 12:01:23 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:10.840 12:01:23 -- app/version.sh@14 -- # cut -f2 00:07:10.840 12:01:23 -- app/version.sh@14 -- # tr -d '"' 00:07:10.840 12:01:23 -- app/version.sh@20 -- # suffix=-pre 00:07:10.840 12:01:23 -- app/version.sh@22 -- # version=24.1 00:07:10.840 12:01:23 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:10.840 12:01:23 -- app/version.sh@25 -- # version=24.1.1 00:07:10.840 12:01:23 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:10.840 12:01:23 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:10.840 12:01:23 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:10.840 12:01:23 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:10.840 12:01:23 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:10.840 00:07:10.840 real 0m0.171s 00:07:10.840 user 0m0.085s 00:07:10.840 sys 0m0.126s 00:07:10.840 12:01:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.840 12:01:23 -- common/autotest_common.sh@10 -- # set +x 00:07:10.840 ************************************ 00:07:10.840 END TEST version 00:07:10.840 ************************************ 00:07:10.840 12:01:23 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:10.840 12:01:23 -- spdk/autotest.sh@204 -- # uname -s 00:07:10.840 12:01:23 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:10.840 12:01:23 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:10.840 12:01:23 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:10.840 12:01:23 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:10.840 12:01:23 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:10.840 12:01:23 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:10.840 12:01:23 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:07:10.840 12:01:23 -- common/autotest_common.sh@10 -- # set +x 00:07:11.101 12:01:23 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:11.101 12:01:23 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:07:11.101 12:01:23 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:11.101 12:01:23 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:11.101 12:01:23 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:07:11.101 12:01:23 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:07:11.101 12:01:23 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:11.101 12:01:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:11.101 12:01:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.101 12:01:23 -- common/autotest_common.sh@10 -- # set +x 00:07:11.101 ************************************ 00:07:11.101 START TEST nvmf_tcp 00:07:11.101 ************************************ 00:07:11.101 12:01:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:11.101 * Looking for test storage... 00:07:11.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:11.101 12:01:23 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:11.101 12:01:24 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:11.101 12:01:24 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.101 12:01:24 -- nvmf/common.sh@7 -- # uname -s 00:07:11.101 12:01:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.101 12:01:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.101 12:01:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.101 12:01:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.101 12:01:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.101 12:01:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.101 12:01:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.101 12:01:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.101 12:01:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.101 12:01:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.101 12:01:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:11.101 12:01:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:11.101 12:01:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.101 12:01:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.101 12:01:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.101 12:01:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.101 12:01:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.101 12:01:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.101 12:01:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.101 12:01:24 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.101 12:01:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.101 12:01:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.101 12:01:24 -- paths/export.sh@5 -- # export PATH 00:07:11.101 12:01:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.101 12:01:24 -- nvmf/common.sh@46 -- # : 0 00:07:11.101 12:01:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:11.101 12:01:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:11.101 12:01:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:11.101 12:01:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.101 12:01:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.101 12:01:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:11.101 12:01:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:11.101 12:01:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:11.101 12:01:24 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:11.101 12:01:24 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:11.101 12:01:24 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:11.101 12:01:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:11.101 12:01:24 -- common/autotest_common.sh@10 -- # set +x 00:07:11.101 12:01:24 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:11.101 12:01:24 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:11.101 12:01:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:11.101 12:01:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.101 12:01:24 -- common/autotest_common.sh@10 -- # set +x 00:07:11.101 ************************************ 00:07:11.101 START TEST nvmf_example 00:07:11.101 ************************************ 00:07:11.101 12:01:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:11.101 * Looking for test storage... 
00:07:11.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.362 12:01:24 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.362 12:01:24 -- nvmf/common.sh@7 -- # uname -s 00:07:11.362 12:01:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.362 12:01:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.362 12:01:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.362 12:01:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.362 12:01:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.362 12:01:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.362 12:01:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.362 12:01:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.362 12:01:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.362 12:01:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.362 12:01:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:11.362 12:01:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:11.362 12:01:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.362 12:01:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.362 12:01:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.362 12:01:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.362 12:01:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.362 12:01:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.362 12:01:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.362 12:01:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.362 12:01:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.362 12:01:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.362 12:01:24 -- paths/export.sh@5 -- # export PATH 00:07:11.362 12:01:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.362 12:01:24 -- nvmf/common.sh@46 -- # : 0 00:07:11.362 12:01:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:11.362 12:01:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:11.362 12:01:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:11.362 12:01:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.362 12:01:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.362 12:01:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:11.362 12:01:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:11.362 12:01:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:11.362 12:01:24 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:11.362 12:01:24 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:11.362 12:01:24 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:11.362 12:01:24 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:11.362 12:01:24 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:11.362 12:01:24 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:11.362 12:01:24 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:11.362 12:01:24 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:11.362 12:01:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:11.362 12:01:24 -- common/autotest_common.sh@10 -- # set +x 00:07:11.362 12:01:24 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:11.362 12:01:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:11.362 12:01:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.362 12:01:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:11.362 12:01:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:11.362 12:01:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:11.362 12:01:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.362 12:01:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:11.362 12:01:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.362 12:01:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:11.362 12:01:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:11.362 12:01:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:11.362 12:01:24 -- 
common/autotest_common.sh@10 -- # set +x 00:07:19.501 12:01:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:19.501 12:01:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:19.501 12:01:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:19.501 12:01:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:19.501 12:01:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:19.501 12:01:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:19.501 12:01:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:19.501 12:01:31 -- nvmf/common.sh@294 -- # net_devs=() 00:07:19.501 12:01:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:19.501 12:01:31 -- nvmf/common.sh@295 -- # e810=() 00:07:19.501 12:01:31 -- nvmf/common.sh@295 -- # local -ga e810 00:07:19.501 12:01:31 -- nvmf/common.sh@296 -- # x722=() 00:07:19.501 12:01:31 -- nvmf/common.sh@296 -- # local -ga x722 00:07:19.501 12:01:31 -- nvmf/common.sh@297 -- # mlx=() 00:07:19.501 12:01:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:19.501 12:01:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:19.501 12:01:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:19.501 12:01:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:19.501 12:01:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:19.501 12:01:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:19.501 12:01:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:19.501 12:01:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:19.501 12:01:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:19.501 12:01:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:19.501 12:01:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:19.501 12:01:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:19.501 12:01:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:19.501 12:01:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:19.501 12:01:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:19.501 12:01:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:19.501 12:01:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:19.501 12:01:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:19.501 12:01:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:19.501 12:01:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:19.501 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:19.501 12:01:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:19.501 12:01:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:19.501 12:01:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.501 12:01:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.501 12:01:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:19.501 12:01:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:19.501 12:01:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:19.501 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:19.501 12:01:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:19.501 12:01:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:19.501 12:01:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.501 12:01:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:07:19.501 12:01:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:19.501 12:01:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:19.501 12:01:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:19.501 12:01:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:19.501 12:01:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:19.501 12:01:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.501 12:01:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:19.501 12:01:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.501 12:01:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:19.501 Found net devices under 0000:31:00.0: cvl_0_0 00:07:19.501 12:01:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.501 12:01:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:19.501 12:01:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.501 12:01:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:19.501 12:01:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.501 12:01:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:19.501 Found net devices under 0000:31:00.1: cvl_0_1 00:07:19.501 12:01:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.501 12:01:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:19.501 12:01:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:19.501 12:01:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:19.501 12:01:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:19.501 12:01:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:19.501 12:01:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:19.502 12:01:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:19.502 12:01:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:19.502 12:01:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:19.502 12:01:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:19.502 12:01:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:19.502 12:01:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:19.502 12:01:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:19.502 12:01:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:19.502 12:01:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:19.502 12:01:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:19.502 12:01:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:19.502 12:01:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:19.502 12:01:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:19.502 12:01:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:19.502 12:01:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:19.502 12:01:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:19.502 12:01:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:19.502 12:01:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:19.502 12:01:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:19.502 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:19.502 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:07:19.502 00:07:19.502 --- 10.0.0.2 ping statistics --- 00:07:19.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.502 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:07:19.502 12:01:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:19.502 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:19.502 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:07:19.502 00:07:19.502 --- 10.0.0.1 ping statistics --- 00:07:19.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.502 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:07:19.502 12:01:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:19.502 12:01:31 -- nvmf/common.sh@410 -- # return 0 00:07:19.502 12:01:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:19.502 12:01:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:19.502 12:01:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:19.502 12:01:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:19.502 12:01:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:19.502 12:01:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:19.502 12:01:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:19.502 12:01:31 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:19.502 12:01:31 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:19.502 12:01:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:19.502 12:01:31 -- common/autotest_common.sh@10 -- # set +x 00:07:19.502 12:01:31 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:19.502 12:01:31 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:19.502 12:01:31 -- target/nvmf_example.sh@34 -- # nvmfpid=1290525 00:07:19.502 12:01:31 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:19.502 12:01:31 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:19.502 12:01:31 -- target/nvmf_example.sh@36 -- # waitforlisten 1290525 00:07:19.502 12:01:31 -- common/autotest_common.sh@819 -- # '[' -z 1290525 ']' 00:07:19.502 12:01:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.502 12:01:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:19.502 12:01:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
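The TCP test bed configured just above moves one port of the NIC pair into a network namespace so that target (10.0.0.2) and initiator (10.0.0.1) traffic actually crosses the link. Condensed from the nvmf/common.sh trace, the essential steps are:

    # Interface names cvl_0_0/cvl_0_1 come from the device discovery earlier in this log.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2           # the sanity ping whose reply is shown above

The target process is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ...), which is why the nvmf example below listens on 10.0.0.2.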
00:07:19.502 12:01:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:19.502 12:01:31 -- common/autotest_common.sh@10 -- # set +x 00:07:19.502 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.502 12:01:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:19.502 12:01:32 -- common/autotest_common.sh@852 -- # return 0 00:07:19.502 12:01:32 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:19.502 12:01:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:19.502 12:01:32 -- common/autotest_common.sh@10 -- # set +x 00:07:19.502 12:01:32 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:19.502 12:01:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.502 12:01:32 -- common/autotest_common.sh@10 -- # set +x 00:07:19.502 12:01:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.502 12:01:32 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:19.502 12:01:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.502 12:01:32 -- common/autotest_common.sh@10 -- # set +x 00:07:19.502 12:01:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.502 12:01:32 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:19.502 12:01:32 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:19.502 12:01:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.502 12:01:32 -- common/autotest_common.sh@10 -- # set +x 00:07:19.502 12:01:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.502 12:01:32 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:19.502 12:01:32 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:19.502 12:01:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.502 12:01:32 -- common/autotest_common.sh@10 -- # set +x 00:07:19.502 12:01:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.502 12:01:32 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:19.502 12:01:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.502 12:01:32 -- common/autotest_common.sh@10 -- # set +x 00:07:19.502 12:01:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.502 12:01:32 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:19.502 12:01:32 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:19.502 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.736 Initializing NVMe Controllers 00:07:31.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:31.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:31.736 Initialization complete. Launching workers. 
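The example target above is assembled with a handful of RPCs: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace, and a listener on 10.0.0.2:4420. Written out as plain rpc.py calls (method names and arguments as in the trace; in the job itself rpc_cmd routes them into the namespaced example app):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512        # 64 MiB backing bdev, 512-byte blocks -> Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

As a sanity check on the perf summary that follows: spdk_nvme_perf drives 64 queued 4096-byte I/Os (-q 64 -o 4096) in a mixed random workload (-w randrw -M 30) for 10 seconds, and 19162 IOPS x 4096 bytes ≈ 74.85 MiB/s, which matches the reported throughput column.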
00:07:31.736 ======================================================== 00:07:31.736 Latency(us) 00:07:31.736 Device Information : IOPS MiB/s Average min max 00:07:31.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19162.00 74.85 3339.61 849.28 16167.68 00:07:31.736 ======================================================== 00:07:31.736 Total : 19162.00 74.85 3339.61 849.28 16167.68 00:07:31.736 00:07:31.737 12:01:42 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:31.737 12:01:42 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:31.737 12:01:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:31.737 12:01:42 -- nvmf/common.sh@116 -- # sync 00:07:31.737 12:01:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:31.737 12:01:42 -- nvmf/common.sh@119 -- # set +e 00:07:31.737 12:01:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:31.737 12:01:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:31.737 rmmod nvme_tcp 00:07:31.737 rmmod nvme_fabrics 00:07:31.737 rmmod nvme_keyring 00:07:31.737 12:01:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:31.737 12:01:42 -- nvmf/common.sh@123 -- # set -e 00:07:31.737 12:01:42 -- nvmf/common.sh@124 -- # return 0 00:07:31.737 12:01:42 -- nvmf/common.sh@477 -- # '[' -n 1290525 ']' 00:07:31.737 12:01:42 -- nvmf/common.sh@478 -- # killprocess 1290525 00:07:31.737 12:01:42 -- common/autotest_common.sh@926 -- # '[' -z 1290525 ']' 00:07:31.737 12:01:42 -- common/autotest_common.sh@930 -- # kill -0 1290525 00:07:31.737 12:01:42 -- common/autotest_common.sh@931 -- # uname 00:07:31.737 12:01:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:31.737 12:01:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1290525 00:07:31.737 12:01:42 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:07:31.737 12:01:42 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:07:31.737 12:01:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1290525' 00:07:31.737 killing process with pid 1290525 00:07:31.737 12:01:42 -- common/autotest_common.sh@945 -- # kill 1290525 00:07:31.737 12:01:42 -- common/autotest_common.sh@950 -- # wait 1290525 00:07:31.737 nvmf threads initialize successfully 00:07:31.737 bdev subsystem init successfully 00:07:31.737 created a nvmf target service 00:07:31.737 create targets's poll groups done 00:07:31.737 all subsystems of target started 00:07:31.737 nvmf target is running 00:07:31.737 all subsystems of target stopped 00:07:31.737 destroy targets's poll groups done 00:07:31.737 destroyed the nvmf target service 00:07:31.737 bdev subsystem finish successfully 00:07:31.737 nvmf threads destroy successfully 00:07:31.737 12:01:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:31.737 12:01:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:31.737 12:01:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:31.737 12:01:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:31.737 12:01:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:31.737 12:01:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.737 12:01:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:31.737 12:01:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.997 12:01:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:31.997 12:01:44 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:31.997 12:01:44 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:07:31.997 12:01:44 -- common/autotest_common.sh@10 -- # set +x 00:07:31.997 00:07:31.997 real 0m20.944s 00:07:31.997 user 0m46.618s 00:07:31.997 sys 0m6.426s 00:07:31.997 12:01:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.997 12:01:44 -- common/autotest_common.sh@10 -- # set +x 00:07:31.997 ************************************ 00:07:31.997 END TEST nvmf_example 00:07:31.997 ************************************ 00:07:31.997 12:01:45 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:31.997 12:01:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:31.997 12:01:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.997 12:01:45 -- common/autotest_common.sh@10 -- # set +x 00:07:32.261 ************************************ 00:07:32.261 START TEST nvmf_filesystem 00:07:32.261 ************************************ 00:07:32.261 12:01:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:32.261 * Looking for test storage... 00:07:32.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.261 12:01:45 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:32.261 12:01:45 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:32.261 12:01:45 -- common/autotest_common.sh@34 -- # set -e 00:07:32.261 12:01:45 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:32.261 12:01:45 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:32.261 12:01:45 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:32.261 12:01:45 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:32.261 12:01:45 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:32.261 12:01:45 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:32.261 12:01:45 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:32.261 12:01:45 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:32.261 12:01:45 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:32.261 12:01:45 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:32.261 12:01:45 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:32.261 12:01:45 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:32.261 12:01:45 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:32.261 12:01:45 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:32.261 12:01:45 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:32.261 12:01:45 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:32.261 12:01:45 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:32.261 12:01:45 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:32.261 12:01:45 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:32.261 12:01:45 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:32.261 12:01:45 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:32.261 12:01:45 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:32.262 12:01:45 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:32.262 12:01:45 -- 
common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:32.262 12:01:45 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:32.262 12:01:45 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:32.262 12:01:45 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:32.262 12:01:45 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:32.262 12:01:45 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:32.262 12:01:45 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:32.262 12:01:45 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:32.262 12:01:45 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:32.262 12:01:45 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:32.262 12:01:45 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:32.262 12:01:45 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:32.262 12:01:45 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:32.262 12:01:45 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:32.262 12:01:45 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:32.262 12:01:45 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:32.262 12:01:45 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:32.262 12:01:45 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:32.262 12:01:45 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:32.262 12:01:45 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:32.262 12:01:45 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:32.262 12:01:45 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:32.262 12:01:45 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:32.262 12:01:45 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:32.262 12:01:45 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:32.262 12:01:45 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:32.262 12:01:45 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:32.262 12:01:45 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:32.262 12:01:45 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:32.262 12:01:45 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:32.262 12:01:45 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:32.262 12:01:45 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:07:32.262 12:01:45 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:32.262 12:01:45 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:32.262 12:01:45 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:32.262 12:01:45 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:32.262 12:01:45 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:32.262 12:01:45 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:32.262 12:01:45 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:32.262 12:01:45 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:32.262 12:01:45 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:32.262 12:01:45 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:32.262 12:01:45 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:32.262 12:01:45 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:32.262 12:01:45 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:32.262 12:01:45 -- 
common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:32.262 12:01:45 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:32.262 12:01:45 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:32.262 12:01:45 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:32.262 12:01:45 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:32.262 12:01:45 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:32.262 12:01:45 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:32.262 12:01:45 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:32.262 12:01:45 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:32.262 12:01:45 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:32.262 12:01:45 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:32.262 12:01:45 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:32.262 12:01:45 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:32.262 12:01:45 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:32.262 12:01:45 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:32.262 12:01:45 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:32.262 12:01:45 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:32.262 12:01:45 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:32.262 12:01:45 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:32.262 12:01:45 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:32.262 12:01:45 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:32.262 12:01:45 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:32.262 12:01:45 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:32.262 12:01:45 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:32.262 12:01:45 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:32.262 12:01:45 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:32.262 12:01:45 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:32.262 12:01:45 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:32.262 12:01:45 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:32.262 12:01:45 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:32.262 12:01:45 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:32.262 #define SPDK_CONFIG_H 00:07:32.262 #define SPDK_CONFIG_APPS 1 00:07:32.262 #define SPDK_CONFIG_ARCH native 00:07:32.262 #undef SPDK_CONFIG_ASAN 00:07:32.262 #undef SPDK_CONFIG_AVAHI 00:07:32.262 #undef SPDK_CONFIG_CET 00:07:32.262 #define SPDK_CONFIG_COVERAGE 1 00:07:32.262 #define SPDK_CONFIG_CROSS_PREFIX 00:07:32.262 #undef SPDK_CONFIG_CRYPTO 00:07:32.262 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:32.262 #undef SPDK_CONFIG_CUSTOMOCF 00:07:32.262 #undef SPDK_CONFIG_DAOS 00:07:32.262 #define SPDK_CONFIG_DAOS_DIR 00:07:32.262 #define SPDK_CONFIG_DEBUG 1 00:07:32.262 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:32.262 #define SPDK_CONFIG_DPDK_DIR 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:32.262 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:32.262 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:32.262 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:32.262 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:32.262 #define SPDK_CONFIG_EXAMPLES 1 00:07:32.262 #undef SPDK_CONFIG_FC 00:07:32.262 #define SPDK_CONFIG_FC_PATH 00:07:32.262 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:32.262 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:32.262 #undef SPDK_CONFIG_FUSE 00:07:32.262 #undef SPDK_CONFIG_FUZZER 00:07:32.262 #define SPDK_CONFIG_FUZZER_LIB 00:07:32.262 #undef SPDK_CONFIG_GOLANG 00:07:32.262 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:32.262 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:32.262 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:32.262 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:32.262 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:32.262 #define SPDK_CONFIG_IDXD 1 00:07:32.262 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:32.262 #undef SPDK_CONFIG_IPSEC_MB 00:07:32.262 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:32.262 #define SPDK_CONFIG_ISAL 1 00:07:32.262 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:32.262 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:32.262 #define SPDK_CONFIG_LIBDIR 00:07:32.262 #undef SPDK_CONFIG_LTO 00:07:32.262 #define SPDK_CONFIG_MAX_LCORES 00:07:32.262 #define SPDK_CONFIG_NVME_CUSE 1 00:07:32.262 #undef SPDK_CONFIG_OCF 00:07:32.262 #define SPDK_CONFIG_OCF_PATH 00:07:32.262 #define SPDK_CONFIG_OPENSSL_PATH 00:07:32.262 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:32.262 #undef SPDK_CONFIG_PGO_USE 00:07:32.262 #define SPDK_CONFIG_PREFIX /usr/local 00:07:32.262 #undef SPDK_CONFIG_RAID5F 00:07:32.262 #undef SPDK_CONFIG_RBD 00:07:32.262 #define SPDK_CONFIG_RDMA 1 00:07:32.262 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:32.262 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:32.262 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:32.262 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:32.262 #define SPDK_CONFIG_SHARED 1 00:07:32.262 #undef SPDK_CONFIG_SMA 00:07:32.262 #define SPDK_CONFIG_TESTS 1 00:07:32.262 #undef SPDK_CONFIG_TSAN 00:07:32.262 #define SPDK_CONFIG_UBLK 1 00:07:32.262 #define SPDK_CONFIG_UBSAN 1 00:07:32.262 #undef SPDK_CONFIG_UNIT_TESTS 00:07:32.262 #undef SPDK_CONFIG_URING 00:07:32.262 #define SPDK_CONFIG_URING_PATH 00:07:32.262 #undef SPDK_CONFIG_URING_ZNS 00:07:32.262 #undef SPDK_CONFIG_USDT 00:07:32.262 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:32.262 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:32.262 #define SPDK_CONFIG_VFIO_USER 1 00:07:32.262 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:32.262 #define SPDK_CONFIG_VHOST 1 00:07:32.262 #define SPDK_CONFIG_VIRTIO 1 00:07:32.262 #undef SPDK_CONFIG_VTUNE 00:07:32.262 #define SPDK_CONFIG_VTUNE_DIR 00:07:32.262 #define SPDK_CONFIG_WERROR 1 00:07:32.262 #define SPDK_CONFIG_WPDK_DIR 00:07:32.262 #undef SPDK_CONFIG_XNVME 00:07:32.262 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:32.262 12:01:45 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:32.262 12:01:45 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:32.262 12:01:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.262 12:01:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.262 
12:01:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.262 12:01:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.263 12:01:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.263 12:01:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.263 12:01:45 -- paths/export.sh@5 -- # export PATH 00:07:32.263 12:01:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.263 12:01:45 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:32.263 12:01:45 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:32.263 12:01:45 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:32.263 12:01:45 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:32.263 12:01:45 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:32.263 12:01:45 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:32.263 12:01:45 -- pm/common@16 -- # TEST_TAG=N/A 00:07:32.263 12:01:45 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:32.263 12:01:45 -- common/autotest_common.sh@52 -- # : 1 00:07:32.263 12:01:45 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:32.263 12:01:45 -- common/autotest_common.sh@56 -- # : 0 
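The paired ': <value>' / 'export <FLAG>' traces starting here and continuing below (through SPDK_JSONRPC_GO_CLIENT) are autotest_common.sh defaulting and then exporting the feature flags that steer the rest of this run: SPDK_TEST_NVMF=1, SPDK_TEST_NVMF_TRANSPORT=tcp, SPDK_TEST_NVMF_NICS=e810, SPDK_RUN_UBSAN=1, and so on. A minimal bash sketch of the idiom those traces appear to correspond to -- flag names and values are taken from the trace, the exact source lines are not shown in this log:

    # Keep a value already present in the environment, otherwise fall back to the
    # default, then export it so child scripts (nvmf/common.sh, filesystem.sh, ...)
    # and the test binaries all see the same setting. The value printed in the
    # ': <value>' trace is whatever this particular run ended up using.
    : "${RUN_NIGHTLY:=1}"
    export RUN_NIGHTLY

    : "${SPDK_TEST_NVMF:=1}"
    export SPDK_TEST_NVMF

    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
    export SPDK_TEST_NVMF_TRANSPORT

    : "${SPDK_TEST_NVMF_NICS:=e810}"
    export SPDK_TEST_NVMF_NICS

A flag set by the Jenkins job therefore wins; anything left unset falls back to the value seen in the trace.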
00:07:32.263 12:01:45 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:32.263 12:01:45 -- common/autotest_common.sh@58 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:32.263 12:01:45 -- common/autotest_common.sh@60 -- # : 1 00:07:32.263 12:01:45 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:32.263 12:01:45 -- common/autotest_common.sh@62 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:32.263 12:01:45 -- common/autotest_common.sh@64 -- # : 00:07:32.263 12:01:45 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:32.263 12:01:45 -- common/autotest_common.sh@66 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:32.263 12:01:45 -- common/autotest_common.sh@68 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:32.263 12:01:45 -- common/autotest_common.sh@70 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:32.263 12:01:45 -- common/autotest_common.sh@72 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:32.263 12:01:45 -- common/autotest_common.sh@74 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:32.263 12:01:45 -- common/autotest_common.sh@76 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:32.263 12:01:45 -- common/autotest_common.sh@78 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:32.263 12:01:45 -- common/autotest_common.sh@80 -- # : 1 00:07:32.263 12:01:45 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:32.263 12:01:45 -- common/autotest_common.sh@82 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:32.263 12:01:45 -- common/autotest_common.sh@84 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:32.263 12:01:45 -- common/autotest_common.sh@86 -- # : 1 00:07:32.263 12:01:45 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:32.263 12:01:45 -- common/autotest_common.sh@88 -- # : 1 00:07:32.263 12:01:45 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:32.263 12:01:45 -- common/autotest_common.sh@90 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:32.263 12:01:45 -- common/autotest_common.sh@92 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:32.263 12:01:45 -- common/autotest_common.sh@94 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:32.263 12:01:45 -- common/autotest_common.sh@96 -- # : tcp 00:07:32.263 12:01:45 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:32.263 12:01:45 -- common/autotest_common.sh@98 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:32.263 12:01:45 -- common/autotest_common.sh@100 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:32.263 12:01:45 -- common/autotest_common.sh@102 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:32.263 12:01:45 -- 
common/autotest_common.sh@104 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:32.263 12:01:45 -- common/autotest_common.sh@106 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:32.263 12:01:45 -- common/autotest_common.sh@108 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:32.263 12:01:45 -- common/autotest_common.sh@110 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:32.263 12:01:45 -- common/autotest_common.sh@112 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:32.263 12:01:45 -- common/autotest_common.sh@114 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:32.263 12:01:45 -- common/autotest_common.sh@116 -- # : 1 00:07:32.263 12:01:45 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:32.263 12:01:45 -- common/autotest_common.sh@118 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:32.263 12:01:45 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:32.263 12:01:45 -- common/autotest_common.sh@120 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:32.263 12:01:45 -- common/autotest_common.sh@122 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:32.263 12:01:45 -- common/autotest_common.sh@124 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:32.263 12:01:45 -- common/autotest_common.sh@126 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:32.263 12:01:45 -- common/autotest_common.sh@128 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:32.263 12:01:45 -- common/autotest_common.sh@130 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:32.263 12:01:45 -- common/autotest_common.sh@132 -- # : v23.11 00:07:32.263 12:01:45 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:32.263 12:01:45 -- common/autotest_common.sh@134 -- # : true 00:07:32.263 12:01:45 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:32.263 12:01:45 -- common/autotest_common.sh@136 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:32.263 12:01:45 -- common/autotest_common.sh@138 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:32.263 12:01:45 -- common/autotest_common.sh@140 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:32.263 12:01:45 -- common/autotest_common.sh@142 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:32.263 12:01:45 -- common/autotest_common.sh@144 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:32.263 12:01:45 -- common/autotest_common.sh@146 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:32.263 12:01:45 -- common/autotest_common.sh@148 -- # : e810 00:07:32.263 12:01:45 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:32.263 12:01:45 -- common/autotest_common.sh@150 -- # : 0 00:07:32.263 12:01:45 -- 
common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:32.263 12:01:45 -- common/autotest_common.sh@152 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:32.263 12:01:45 -- common/autotest_common.sh@154 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:32.263 12:01:45 -- common/autotest_common.sh@156 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:32.263 12:01:45 -- common/autotest_common.sh@158 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:32.263 12:01:45 -- common/autotest_common.sh@160 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:32.263 12:01:45 -- common/autotest_common.sh@163 -- # : 00:07:32.263 12:01:45 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:32.263 12:01:45 -- common/autotest_common.sh@165 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:32.263 12:01:45 -- common/autotest_common.sh@167 -- # : 0 00:07:32.263 12:01:45 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:32.263 12:01:45 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:32.263 12:01:45 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:32.263 12:01:45 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:32.263 12:01:45 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:32.263 12:01:45 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:32.264 12:01:45 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:32.264 12:01:45 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:32.264 12:01:45 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:32.264 12:01:45 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:32.264 12:01:45 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:32.264 12:01:45 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:32.264 12:01:45 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:32.264 12:01:45 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:32.264 12:01:45 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:32.264 12:01:45 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:32.264 12:01:45 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:32.264 12:01:45 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:32.264 12:01:45 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:32.264 12:01:45 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:32.264 12:01:45 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:32.264 12:01:45 -- common/autotest_common.sh@196 -- # cat 00:07:32.264 12:01:45 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:32.264 12:01:45 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:32.264 12:01:45 -- 
common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:32.264 12:01:45 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:32.264 12:01:45 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:32.264 12:01:45 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:32.264 12:01:45 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:32.264 12:01:45 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:32.264 12:01:45 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:32.264 12:01:45 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:32.264 12:01:45 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:32.264 12:01:45 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:32.264 12:01:45 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:32.264 12:01:45 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:32.264 12:01:45 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:32.264 12:01:45 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:32.264 12:01:45 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:32.264 12:01:45 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:32.264 12:01:45 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:32.264 12:01:45 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:07:32.264 12:01:45 -- common/autotest_common.sh@249 -- # export valgrind= 00:07:32.264 12:01:45 -- common/autotest_common.sh@249 -- # valgrind= 00:07:32.264 12:01:45 -- common/autotest_common.sh@255 -- # uname -s 00:07:32.264 12:01:45 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:07:32.264 12:01:45 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:07:32.264 12:01:45 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:07:32.264 12:01:45 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:07:32.264 12:01:45 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:32.264 12:01:45 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:32.264 12:01:45 -- common/autotest_common.sh@265 -- # MAKE=make 00:07:32.264 12:01:45 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j144 00:07:32.264 12:01:45 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:07:32.264 12:01:45 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:07:32.264 12:01:45 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:32.264 12:01:45 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:07:32.264 12:01:45 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:07:32.264 12:01:45 -- common/autotest_common.sh@291 -- # for i in "$@" 00:07:32.264 12:01:45 -- common/autotest_common.sh@292 -- # case "$i" in 00:07:32.264 12:01:45 -- common/autotest_common.sh@297 
-- # TEST_TRANSPORT=tcp 00:07:32.264 12:01:45 -- common/autotest_common.sh@309 -- # [[ -z 1293456 ]] 00:07:32.264 12:01:45 -- common/autotest_common.sh@309 -- # kill -0 1293456 00:07:32.264 12:01:45 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:07:32.264 12:01:45 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:07:32.264 12:01:45 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:07:32.264 12:01:45 -- common/autotest_common.sh@322 -- # local mount target_dir 00:07:32.264 12:01:45 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:07:32.264 12:01:45 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:07:32.264 12:01:45 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:07:32.264 12:01:45 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:07:32.264 12:01:45 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.J0Us8I 00:07:32.264 12:01:45 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:32.264 12:01:45 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:07:32.264 12:01:45 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:07:32.264 12:01:45 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.J0Us8I/tests/target /tmp/spdk.J0Us8I 00:07:32.264 12:01:45 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:07:32.264 12:01:45 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:32.264 12:01:45 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:07:32.264 12:01:45 -- common/autotest_common.sh@318 -- # df -T 00:07:32.264 12:01:45 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:07:32.264 12:01:45 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:07:32.264 12:01:45 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:07:32.264 12:01:45 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:07:32.264 12:01:45 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:07:32.264 12:01:45 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:32.264 12:01:45 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:07:32.264 12:01:45 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:07:32.264 12:01:45 -- common/autotest_common.sh@353 -- # avails["$mount"]=957403136 00:07:32.264 12:01:45 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:07:32.264 12:01:45 -- common/autotest_common.sh@354 -- # uses["$mount"]=4327026688 00:07:32.264 12:01:45 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:32.264 12:01:45 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:07:32.264 12:01:45 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:07:32.264 12:01:45 -- common/autotest_common.sh@353 -- # avails["$mount"]=122361163776 00:07:32.264 12:01:45 -- common/autotest_common.sh@353 -- # sizes["$mount"]=129370984448 00:07:32.264 12:01:45 -- common/autotest_common.sh@354 -- # uses["$mount"]=7009820672 00:07:32.264 12:01:45 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:32.264 12:01:45 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:32.264 12:01:45 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 
00:07:32.264 12:01:45 -- common/autotest_common.sh@353 -- # avails["$mount"]=64684232704 00:07:32.264 12:01:45 -- common/autotest_common.sh@353 -- # sizes["$mount"]=64685490176 00:07:32.264 12:01:45 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:07:32.264 12:01:45 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:32.264 12:01:45 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:32.264 12:01:45 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:32.264 12:01:45 -- common/autotest_common.sh@353 -- # avails["$mount"]=25864450048 00:07:32.264 12:01:45 -- common/autotest_common.sh@353 -- # sizes["$mount"]=25874198528 00:07:32.264 12:01:45 -- common/autotest_common.sh@354 -- # uses["$mount"]=9748480 00:07:32.264 12:01:45 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:32.264 12:01:45 -- common/autotest_common.sh@352 -- # mounts["$mount"]=efivarfs 00:07:32.265 12:01:45 -- common/autotest_common.sh@352 -- # fss["$mount"]=efivarfs 00:07:32.265 12:01:45 -- common/autotest_common.sh@353 -- # avails["$mount"]=179200 00:07:32.265 12:01:45 -- common/autotest_common.sh@353 -- # sizes["$mount"]=507904 00:07:32.265 12:01:45 -- common/autotest_common.sh@354 -- # uses["$mount"]=324608 00:07:32.265 12:01:45 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:32.265 12:01:45 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:32.265 12:01:45 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:32.265 12:01:45 -- common/autotest_common.sh@353 -- # avails["$mount"]=64685133824 00:07:32.265 12:01:45 -- common/autotest_common.sh@353 -- # sizes["$mount"]=64685494272 00:07:32.265 12:01:45 -- common/autotest_common.sh@354 -- # uses["$mount"]=360448 00:07:32.265 12:01:45 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:32.265 12:01:45 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:32.265 12:01:45 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:32.265 12:01:45 -- common/autotest_common.sh@353 -- # avails["$mount"]=12937093120 00:07:32.265 12:01:45 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12937097216 00:07:32.265 12:01:45 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:07:32.265 12:01:45 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:32.265 12:01:45 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:07:32.265 * Looking for test storage... 
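The storage probe above (df -T read into the mounts/fss/sizes/avails/uses arrays) together with the arithmetic that follows below is set_test_storage at work: the test wants roughly 2.2 GB under its target directory (requested_size=2214592512, i.e. 2 GiB + 64 MiB), so each candidate directory is resolved to its mount point, the available space there is compared against the request, and the candidate is rejected if filling the request would push the filesystem past 95% full. A condensed bash sketch of that logic, reconstructed from the trace (df is forced to byte units here for clarity, and the helper name pick_test_storage is illustrative rather than the SPDK function name):

    requested_size=2214592512            # 2 GiB + 64 MiB, as in the trace above
    declare -A fss sizes avails uses

    # Index every mounted filesystem by its mount point (last df -T column).
    # -B1 keeps everything in bytes so it compares directly with requested_size.
    while read -r source fs size used avail _ mount; do
        fss["$mount"]=$fs
        sizes["$mount"]=$size
        avails["$mount"]=$avail
        uses["$mount"]=$used
    done < <(df -T -B1 | grep -v Filesystem)

    pick_test_storage() {
        local dir mount target_space new_size
        for dir in "$@"; do
            mount=$(df "$dir" | awk '$1 !~ /Filesystem/ {print $6}')
            target_space=${avails[$mount]}
            (( target_space >= requested_size )) || continue
            new_size=$(( ${uses[$mount]} + requested_size ))
            # Reject if the projected usage would leave the filesystem >95% full.
            (( new_size * 100 / ${sizes[$mount]} > 95 )) && continue
            echo "$dir"
            return 0
        done
        return 1
    }

In this run the root overlay passes easily (target_space=122361163776 bytes free, new_size=9224413184, far below the 95% cut-off), so SPDK_TEST_STORAGE ends up pointing at the spdk/test/nvmf/target directory, as the trace below shows.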
00:07:32.265 12:01:45 -- common/autotest_common.sh@359 -- # local target_space new_size 00:07:32.265 12:01:45 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:07:32.265 12:01:45 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.265 12:01:45 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:32.265 12:01:45 -- common/autotest_common.sh@363 -- # mount=/ 00:07:32.265 12:01:45 -- common/autotest_common.sh@365 -- # target_space=122361163776 00:07:32.265 12:01:45 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:07:32.265 12:01:45 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:07:32.265 12:01:45 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:07:32.265 12:01:45 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:07:32.265 12:01:45 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:07:32.265 12:01:45 -- common/autotest_common.sh@372 -- # new_size=9224413184 00:07:32.265 12:01:45 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:32.265 12:01:45 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.265 12:01:45 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.265 12:01:45 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.265 12:01:45 -- common/autotest_common.sh@380 -- # return 0 00:07:32.265 12:01:45 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:07:32.265 12:01:45 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:07:32.265 12:01:45 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:32.265 12:01:45 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:32.265 12:01:45 -- common/autotest_common.sh@1672 -- # true 00:07:32.265 12:01:45 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:07:32.265 12:01:45 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:32.265 12:01:45 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:32.265 12:01:45 -- common/autotest_common.sh@27 -- # exec 00:07:32.265 12:01:45 -- common/autotest_common.sh@29 -- # exec 00:07:32.265 12:01:45 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:32.265 12:01:45 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:32.265 12:01:45 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:32.265 12:01:45 -- common/autotest_common.sh@18 -- # set -x 00:07:32.265 12:01:45 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:32.265 12:01:45 -- nvmf/common.sh@7 -- # uname -s 00:07:32.265 12:01:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:32.265 12:01:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:32.265 12:01:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:32.265 12:01:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:32.265 12:01:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:32.265 12:01:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:32.265 12:01:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:32.265 12:01:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:32.265 12:01:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:32.526 12:01:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:32.526 12:01:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:32.526 12:01:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:32.526 12:01:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:32.526 12:01:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:32.526 12:01:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:32.526 12:01:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:32.526 12:01:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.526 12:01:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.526 12:01:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.526 12:01:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.526 12:01:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.527 12:01:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.527 12:01:45 -- paths/export.sh@5 -- # export PATH 00:07:32.527 12:01:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.527 12:01:45 -- nvmf/common.sh@46 -- # : 0 00:07:32.527 12:01:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:32.527 12:01:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:32.527 12:01:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:32.527 12:01:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:32.527 12:01:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:32.527 12:01:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:32.527 12:01:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:32.527 12:01:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:32.527 12:01:45 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:32.527 12:01:45 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:32.527 12:01:45 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:32.527 12:01:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:32.527 12:01:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:32.527 12:01:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:32.527 12:01:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:32.527 12:01:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:32.527 12:01:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.527 12:01:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:32.527 12:01:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.527 12:01:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:32.527 12:01:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:32.527 12:01:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:32.527 12:01:45 -- common/autotest_common.sh@10 -- # set +x 00:07:40.662 12:01:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:40.662 12:01:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:40.662 12:01:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:40.662 12:01:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:40.662 12:01:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:40.662 12:01:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:40.662 12:01:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:40.662 12:01:52 -- 
nvmf/common.sh@294 -- # net_devs=() 00:07:40.662 12:01:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:40.662 12:01:52 -- nvmf/common.sh@295 -- # e810=() 00:07:40.662 12:01:52 -- nvmf/common.sh@295 -- # local -ga e810 00:07:40.662 12:01:52 -- nvmf/common.sh@296 -- # x722=() 00:07:40.662 12:01:52 -- nvmf/common.sh@296 -- # local -ga x722 00:07:40.662 12:01:52 -- nvmf/common.sh@297 -- # mlx=() 00:07:40.662 12:01:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:40.662 12:01:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:40.662 12:01:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:40.662 12:01:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:40.662 12:01:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:40.662 12:01:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:40.662 12:01:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:40.662 12:01:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:40.662 12:01:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:40.662 12:01:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:40.662 12:01:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:40.662 12:01:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:40.662 12:01:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:40.662 12:01:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:40.662 12:01:52 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:40.662 12:01:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:40.662 12:01:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:40.662 12:01:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:40.662 12:01:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:40.662 12:01:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:40.662 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:40.662 12:01:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:40.662 12:01:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:40.662 12:01:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.662 12:01:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.662 12:01:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:40.662 12:01:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:40.662 12:01:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:40.662 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:40.662 12:01:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:40.662 12:01:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:40.662 12:01:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.662 12:01:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.662 12:01:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:40.662 12:01:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:40.662 12:01:52 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:40.662 12:01:52 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:40.662 12:01:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:40.662 12:01:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.662 12:01:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:40.662 12:01:52 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.662 12:01:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:40.662 Found net devices under 0000:31:00.0: cvl_0_0 00:07:40.662 12:01:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.662 12:01:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:40.662 12:01:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.662 12:01:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:40.662 12:01:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.662 12:01:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:40.662 Found net devices under 0000:31:00.1: cvl_0_1 00:07:40.662 12:01:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.662 12:01:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:40.662 12:01:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:40.662 12:01:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:40.662 12:01:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:40.662 12:01:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:40.662 12:01:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:40.662 12:01:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:40.662 12:01:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:40.662 12:01:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:40.662 12:01:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:40.662 12:01:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:40.662 12:01:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:40.662 12:01:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:40.662 12:01:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:40.662 12:01:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:40.662 12:01:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:40.662 12:01:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:40.662 12:01:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:40.662 12:01:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:40.662 12:01:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:40.662 12:01:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:40.662 12:01:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:40.662 12:01:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:40.662 12:01:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:40.662 12:01:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:40.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:40.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:07:40.662 00:07:40.662 --- 10.0.0.2 ping statistics --- 00:07:40.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.662 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:07:40.662 12:01:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:40.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:40.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:07:40.662 00:07:40.662 --- 10.0.0.1 ping statistics --- 00:07:40.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.662 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:07:40.662 12:01:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:40.662 12:01:52 -- nvmf/common.sh@410 -- # return 0 00:07:40.662 12:01:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:40.662 12:01:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:40.662 12:01:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:40.662 12:01:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:40.662 12:01:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:40.662 12:01:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:40.662 12:01:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:40.662 12:01:52 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:40.662 12:01:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:40.662 12:01:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.662 12:01:52 -- common/autotest_common.sh@10 -- # set +x 00:07:40.662 ************************************ 00:07:40.662 START TEST nvmf_filesystem_no_in_capsule 00:07:40.662 ************************************ 00:07:40.662 12:01:52 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:07:40.662 12:01:52 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:40.662 12:01:52 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:40.662 12:01:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:40.663 12:01:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:40.663 12:01:52 -- common/autotest_common.sh@10 -- # set +x 00:07:40.663 12:01:52 -- nvmf/common.sh@469 -- # nvmfpid=1297349 00:07:40.663 12:01:52 -- nvmf/common.sh@470 -- # waitforlisten 1297349 00:07:40.663 12:01:52 -- common/autotest_common.sh@819 -- # '[' -z 1297349 ']' 00:07:40.663 12:01:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.663 12:01:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:40.663 12:01:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.663 12:01:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:40.663 12:01:52 -- common/autotest_common.sh@10 -- # set +x 00:07:40.663 12:01:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:40.663 [2024-06-11 12:01:52.611244] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:40.663 [2024-06-11 12:01:52.611306] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.663 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.663 [2024-06-11 12:01:52.682445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:40.663 [2024-06-11 12:01:52.721642] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:40.663 [2024-06-11 12:01:52.721783] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:40.663 [2024-06-11 12:01:52.721794] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:40.663 [2024-06-11 12:01:52.721803] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:40.663 [2024-06-11 12:01:52.721990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.663 [2024-06-11 12:01:52.722162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.663 [2024-06-11 12:01:52.722162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:40.663 [2024-06-11 12:01:52.722061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.663 12:01:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:40.663 12:01:53 -- common/autotest_common.sh@852 -- # return 0 00:07:40.663 12:01:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:40.663 12:01:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:40.663 12:01:53 -- common/autotest_common.sh@10 -- # set +x 00:07:40.663 12:01:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:40.663 12:01:53 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:40.663 12:01:53 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:40.663 12:01:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.663 12:01:53 -- common/autotest_common.sh@10 -- # set +x 00:07:40.663 [2024-06-11 12:01:53.431277] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.663 12:01:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.663 12:01:53 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:40.663 12:01:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.663 12:01:53 -- common/autotest_common.sh@10 -- # set +x 00:07:40.663 Malloc1 00:07:40.663 12:01:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.663 12:01:53 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:40.663 12:01:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.663 12:01:53 -- common/autotest_common.sh@10 -- # set +x 00:07:40.663 12:01:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.663 12:01:53 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:40.663 12:01:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.663 12:01:53 -- common/autotest_common.sh@10 -- # set +x 00:07:40.663 12:01:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.663 12:01:53 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
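The rpc_cmd calls above assemble the target configuration for this suite: a TCP transport with in-capsule data disabled (-c 0), a 512 MiB malloc bdev with 512-byte blocks, one subsystem, a namespace backed by that bdev, and a listener on the namespaced address. The same sequence expressed as direct rpc.py invocations is sketched below; the scripts/rpc.py path and the default /var/tmp/spdk.sock socket are assumptions, since the log only shows the rpc_cmd wrapper:

# transport options as passed by the harness (-c 0 disables in-capsule data)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
# 512 MiB malloc bdev with 512-byte blocks
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
# subsystem with the serial the initiator will look for; -a allows any host
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
# listen where the initiator can reach the namespaced port
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420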
00:07:40.663 12:01:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.663 12:01:53 -- common/autotest_common.sh@10 -- # set +x 00:07:40.663 [2024-06-11 12:01:53.558203] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.663 12:01:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.663 12:01:53 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:40.663 12:01:53 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:07:40.663 12:01:53 -- common/autotest_common.sh@1358 -- # local bdev_info 00:07:40.663 12:01:53 -- common/autotest_common.sh@1359 -- # local bs 00:07:40.663 12:01:53 -- common/autotest_common.sh@1360 -- # local nb 00:07:40.663 12:01:53 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:40.663 12:01:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.663 12:01:53 -- common/autotest_common.sh@10 -- # set +x 00:07:40.663 12:01:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.663 12:01:53 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:07:40.663 { 00:07:40.663 "name": "Malloc1", 00:07:40.663 "aliases": [ 00:07:40.663 "d41bffa6-1c27-4fb6-a251-ea656df86a27" 00:07:40.663 ], 00:07:40.663 "product_name": "Malloc disk", 00:07:40.663 "block_size": 512, 00:07:40.663 "num_blocks": 1048576, 00:07:40.663 "uuid": "d41bffa6-1c27-4fb6-a251-ea656df86a27", 00:07:40.663 "assigned_rate_limits": { 00:07:40.663 "rw_ios_per_sec": 0, 00:07:40.663 "rw_mbytes_per_sec": 0, 00:07:40.663 "r_mbytes_per_sec": 0, 00:07:40.663 "w_mbytes_per_sec": 0 00:07:40.663 }, 00:07:40.663 "claimed": true, 00:07:40.663 "claim_type": "exclusive_write", 00:07:40.663 "zoned": false, 00:07:40.663 "supported_io_types": { 00:07:40.663 "read": true, 00:07:40.663 "write": true, 00:07:40.663 "unmap": true, 00:07:40.663 "write_zeroes": true, 00:07:40.663 "flush": true, 00:07:40.663 "reset": true, 00:07:40.663 "compare": false, 00:07:40.663 "compare_and_write": false, 00:07:40.663 "abort": true, 00:07:40.663 "nvme_admin": false, 00:07:40.663 "nvme_io": false 00:07:40.663 }, 00:07:40.663 "memory_domains": [ 00:07:40.663 { 00:07:40.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.663 "dma_device_type": 2 00:07:40.663 } 00:07:40.663 ], 00:07:40.663 "driver_specific": {} 00:07:40.663 } 00:07:40.663 ]' 00:07:40.663 12:01:53 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:07:40.663 12:01:53 -- common/autotest_common.sh@1362 -- # bs=512 00:07:40.663 12:01:53 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:07:40.663 12:01:53 -- common/autotest_common.sh@1363 -- # nb=1048576 00:07:40.663 12:01:53 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:07:40.663 12:01:53 -- common/autotest_common.sh@1367 -- # echo 512 00:07:40.663 12:01:53 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:40.663 12:01:53 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:42.572 12:01:55 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:42.572 12:01:55 -- common/autotest_common.sh@1177 -- # local i=0 00:07:42.572 12:01:55 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:07:42.572 12:01:55 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:07:42.572 12:01:55 -- common/autotest_common.sh@1184 -- # sleep 2 00:07:44.482 12:01:57 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:07:44.482 12:01:57 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:07:44.482 12:01:57 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:07:44.482 12:01:57 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:07:44.482 12:01:57 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:07:44.482 12:01:57 -- common/autotest_common.sh@1187 -- # return 0 00:07:44.482 12:01:57 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:44.482 12:01:57 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:44.482 12:01:57 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:44.482 12:01:57 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:44.482 12:01:57 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:44.482 12:01:57 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:44.482 12:01:57 -- setup/common.sh@80 -- # echo 536870912 00:07:44.482 12:01:57 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:44.482 12:01:57 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:44.482 12:01:57 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:44.482 12:01:57 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:44.743 12:01:57 -- target/filesystem.sh@69 -- # partprobe 00:07:45.003 12:01:57 -- target/filesystem.sh@70 -- # sleep 1 00:07:46.048 12:01:58 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:46.048 12:01:58 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:46.048 12:01:58 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:46.048 12:01:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.048 12:01:58 -- common/autotest_common.sh@10 -- # set +x 00:07:46.048 ************************************ 00:07:46.048 START TEST filesystem_ext4 00:07:46.048 ************************************ 00:07:46.048 12:01:58 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:46.048 12:01:58 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:46.048 12:01:58 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:46.048 12:01:58 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:46.048 12:01:58 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:07:46.048 12:01:58 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:46.048 12:01:58 -- common/autotest_common.sh@904 -- # local i=0 00:07:46.048 12:01:58 -- common/autotest_common.sh@905 -- # local force 00:07:46.048 12:01:58 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:07:46.048 12:01:58 -- common/autotest_common.sh@908 -- # force=-F 00:07:46.048 12:01:58 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:46.048 mke2fs 1.46.5 (30-Dec-2021) 00:07:46.048 Discarding device blocks: 0/522240 done 00:07:46.048 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:46.048 Filesystem UUID: 0c0a3f6c-8a86-4442-af90-1243ebb37527 00:07:46.048 Superblock backups stored on blocks: 00:07:46.048 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:46.048 00:07:46.048 Allocating group tables: 0/64 done 00:07:46.048 Writing inode tables: 0/64 done 00:07:46.308 Creating journal (8192 blocks): done 00:07:47.248 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:07:47.248 00:07:47.248 12:02:00 -- 
common/autotest_common.sh@921 -- # return 0 00:07:47.248 12:02:00 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:47.510 12:02:00 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:47.771 12:02:00 -- target/filesystem.sh@25 -- # sync 00:07:47.771 12:02:00 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:47.771 12:02:00 -- target/filesystem.sh@27 -- # sync 00:07:47.771 12:02:00 -- target/filesystem.sh@29 -- # i=0 00:07:47.771 12:02:00 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:47.771 12:02:00 -- target/filesystem.sh@37 -- # kill -0 1297349 00:07:47.771 12:02:00 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:47.771 12:02:00 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:47.771 12:02:00 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:47.771 12:02:00 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:47.771 00:07:47.771 real 0m1.764s 00:07:47.771 user 0m0.031s 00:07:47.771 sys 0m0.043s 00:07:47.771 12:02:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.771 12:02:00 -- common/autotest_common.sh@10 -- # set +x 00:07:47.771 ************************************ 00:07:47.771 END TEST filesystem_ext4 00:07:47.771 ************************************ 00:07:47.771 12:02:00 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:47.771 12:02:00 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:47.771 12:02:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:47.771 12:02:00 -- common/autotest_common.sh@10 -- # set +x 00:07:47.771 ************************************ 00:07:47.771 START TEST filesystem_btrfs 00:07:47.771 ************************************ 00:07:47.771 12:02:00 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:47.771 12:02:00 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:47.771 12:02:00 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:47.771 12:02:00 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:47.771 12:02:00 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:07:47.771 12:02:00 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:47.771 12:02:00 -- common/autotest_common.sh@904 -- # local i=0 00:07:47.771 12:02:00 -- common/autotest_common.sh@905 -- # local force 00:07:47.771 12:02:00 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:07:47.771 12:02:00 -- common/autotest_common.sh@910 -- # force=-f 00:07:47.771 12:02:00 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:48.342 btrfs-progs v6.6.2 00:07:48.342 See https://btrfs.readthedocs.io for more information. 00:07:48.342 00:07:48.343 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:48.343 NOTE: several default settings have changed in version 5.15, please make sure 00:07:48.343 this does not affect your deployments: 00:07:48.343 - DUP for metadata (-m dup) 00:07:48.343 - enabled no-holes (-O no-holes) 00:07:48.343 - enabled free-space-tree (-R free-space-tree) 00:07:48.343 00:07:48.343 Label: (null) 00:07:48.343 UUID: ca96803d-cce7-46ee-bfc7-7651afb1a65a 00:07:48.343 Node size: 16384 00:07:48.343 Sector size: 4096 00:07:48.343 Filesystem size: 510.00MiB 00:07:48.343 Block group profiles: 00:07:48.343 Data: single 8.00MiB 00:07:48.343 Metadata: DUP 32.00MiB 00:07:48.343 System: DUP 8.00MiB 00:07:48.343 SSD detected: yes 00:07:48.343 Zoned device: no 00:07:48.343 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:48.343 Runtime features: free-space-tree 00:07:48.343 Checksum: crc32c 00:07:48.343 Number of devices: 1 00:07:48.343 Devices: 00:07:48.343 ID SIZE PATH 00:07:48.343 1 510.00MiB /dev/nvme0n1p1 00:07:48.343 00:07:48.343 12:02:01 -- common/autotest_common.sh@921 -- # return 0 00:07:48.343 12:02:01 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:48.603 12:02:01 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:48.603 12:02:01 -- target/filesystem.sh@25 -- # sync 00:07:48.603 12:02:01 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:48.603 12:02:01 -- target/filesystem.sh@27 -- # sync 00:07:48.603 12:02:01 -- target/filesystem.sh@29 -- # i=0 00:07:48.603 12:02:01 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:48.603 12:02:01 -- target/filesystem.sh@37 -- # kill -0 1297349 00:07:48.603 12:02:01 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:48.603 12:02:01 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:48.603 12:02:01 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:48.603 12:02:01 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:48.603 00:07:48.603 real 0m0.889s 00:07:48.603 user 0m0.031s 00:07:48.603 sys 0m0.059s 00:07:48.603 12:02:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.603 12:02:01 -- common/autotest_common.sh@10 -- # set +x 00:07:48.603 ************************************ 00:07:48.603 END TEST filesystem_btrfs 00:07:48.603 ************************************ 00:07:48.603 12:02:01 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:48.603 12:02:01 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:48.603 12:02:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:48.603 12:02:01 -- common/autotest_common.sh@10 -- # set +x 00:07:48.603 ************************************ 00:07:48.603 START TEST filesystem_xfs 00:07:48.603 ************************************ 00:07:48.603 12:02:01 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:07:48.603 12:02:01 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:48.603 12:02:01 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:48.603 12:02:01 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:48.603 12:02:01 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:07:48.603 12:02:01 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:48.603 12:02:01 -- common/autotest_common.sh@904 -- # local i=0 00:07:48.603 12:02:01 -- common/autotest_common.sh@905 -- # local force 00:07:48.603 12:02:01 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:07:48.603 12:02:01 -- common/autotest_common.sh@910 -- # force=-f 00:07:48.603 12:02:01 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:48.864 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:48.864 = sectsz=512 attr=2, projid32bit=1 00:07:48.864 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:48.864 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:48.864 data = bsize=4096 blocks=130560, imaxpct=25 00:07:48.864 = sunit=0 swidth=0 blks 00:07:48.864 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:48.864 log =internal log bsize=4096 blocks=16384, version=2 00:07:48.864 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:48.864 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:49.802 Discarding blocks...Done. 00:07:49.802 12:02:02 -- common/autotest_common.sh@921 -- # return 0 00:07:49.802 12:02:02 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:51.707 12:02:04 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:51.707 12:02:04 -- target/filesystem.sh@25 -- # sync 00:07:51.707 12:02:04 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:51.707 12:02:04 -- target/filesystem.sh@27 -- # sync 00:07:51.707 12:02:04 -- target/filesystem.sh@29 -- # i=0 00:07:51.707 12:02:04 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:51.707 12:02:04 -- target/filesystem.sh@37 -- # kill -0 1297349 00:07:51.707 12:02:04 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:51.707 12:02:04 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:51.707 12:02:04 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:51.707 12:02:04 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:51.707 00:07:51.707 real 0m2.826s 00:07:51.707 user 0m0.022s 00:07:51.707 sys 0m0.058s 00:07:51.707 12:02:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.707 12:02:04 -- common/autotest_common.sh@10 -- # set +x 00:07:51.707 ************************************ 00:07:51.707 END TEST filesystem_xfs 00:07:51.707 ************************************ 00:07:51.707 12:02:04 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:51.707 12:02:04 -- target/filesystem.sh@93 -- # sync 00:07:51.966 12:02:04 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:51.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:51.966 12:02:04 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:51.966 12:02:04 -- common/autotest_common.sh@1198 -- # local i=0 00:07:51.966 12:02:04 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:07:51.966 12:02:04 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:51.966 12:02:04 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:51.966 12:02:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:51.966 12:02:04 -- common/autotest_common.sh@1210 -- # return 0 00:07:51.966 12:02:04 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:51.966 12:02:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:51.966 12:02:04 -- common/autotest_common.sh@10 -- # set +x 00:07:52.227 12:02:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:52.227 12:02:05 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:52.227 12:02:05 -- target/filesystem.sh@101 -- # killprocess 1297349 00:07:52.227 12:02:05 -- common/autotest_common.sh@926 -- # '[' -z 1297349 ']' 00:07:52.227 12:02:05 -- common/autotest_common.sh@930 -- # kill -0 1297349 00:07:52.227 12:02:05 -- 
common/autotest_common.sh@931 -- # uname 00:07:52.227 12:02:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:52.227 12:02:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1297349 00:07:52.227 12:02:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:52.227 12:02:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:52.227 12:02:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1297349' 00:07:52.227 killing process with pid 1297349 00:07:52.227 12:02:05 -- common/autotest_common.sh@945 -- # kill 1297349 00:07:52.227 12:02:05 -- common/autotest_common.sh@950 -- # wait 1297349 00:07:52.487 12:02:05 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:52.487 00:07:52.487 real 0m12.724s 00:07:52.487 user 0m50.239s 00:07:52.487 sys 0m1.005s 00:07:52.487 12:02:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.487 12:02:05 -- common/autotest_common.sh@10 -- # set +x 00:07:52.487 ************************************ 00:07:52.487 END TEST nvmf_filesystem_no_in_capsule 00:07:52.487 ************************************ 00:07:52.487 12:02:05 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:52.487 12:02:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:52.487 12:02:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:52.487 12:02:05 -- common/autotest_common.sh@10 -- # set +x 00:07:52.487 ************************************ 00:07:52.487 START TEST nvmf_filesystem_in_capsule 00:07:52.487 ************************************ 00:07:52.487 12:02:05 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:07:52.487 12:02:05 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:52.487 12:02:05 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:52.487 12:02:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:52.487 12:02:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:52.487 12:02:05 -- common/autotest_common.sh@10 -- # set +x 00:07:52.487 12:02:05 -- nvmf/common.sh@469 -- # nvmfpid=1299959 00:07:52.487 12:02:05 -- nvmf/common.sh@470 -- # waitforlisten 1299959 00:07:52.487 12:02:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:52.487 12:02:05 -- common/autotest_common.sh@819 -- # '[' -z 1299959 ']' 00:07:52.487 12:02:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.487 12:02:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:52.487 12:02:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.487 12:02:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:52.487 12:02:05 -- common/autotest_common.sh@10 -- # set +x 00:07:52.487 [2024-06-11 12:02:05.380939] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:52.487 [2024-06-11 12:02:05.380997] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.487 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.487 [2024-06-11 12:02:05.448304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.487 [2024-06-11 12:02:05.481504] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:52.487 [2024-06-11 12:02:05.481647] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.487 [2024-06-11 12:02:05.481658] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.487 [2024-06-11 12:02:05.481669] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:52.487 [2024-06-11 12:02:05.481843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.487 [2024-06-11 12:02:05.481974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.487 [2024-06-11 12:02:05.482142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.487 [2024-06-11 12:02:05.482239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.429 12:02:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:53.429 12:02:06 -- common/autotest_common.sh@852 -- # return 0 00:07:53.429 12:02:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:53.429 12:02:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:53.429 12:02:06 -- common/autotest_common.sh@10 -- # set +x 00:07:53.429 12:02:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.429 12:02:06 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:53.429 12:02:06 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:53.429 12:02:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:53.429 12:02:06 -- common/autotest_common.sh@10 -- # set +x 00:07:53.429 [2024-06-11 12:02:06.193383] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.429 12:02:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:53.429 12:02:06 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:53.429 12:02:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:53.429 12:02:06 -- common/autotest_common.sh@10 -- # set +x 00:07:53.429 Malloc1 00:07:53.429 12:02:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:53.429 12:02:06 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:53.429 12:02:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:53.429 12:02:06 -- common/autotest_common.sh@10 -- # set +x 00:07:53.429 12:02:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:53.429 12:02:06 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:53.429 12:02:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:53.429 12:02:06 -- common/autotest_common.sh@10 -- # set +x 00:07:53.429 12:02:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:53.429 12:02:06 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
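This second suite repeats the same configuration with one difference visible above: the transport is created with -c 4096 instead of -c 0, so commands may carry up to 4 KiB of data in-capsule rather than forcing a separate data transfer, and that is the only knob the nvmf_filesystem_in_capsule tests change. In the rpc.py form of the earlier sketch the changed line would be:

# allow up to 4096 bytes of in-capsule data for this suite
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096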
00:07:53.429 12:02:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:53.429 12:02:06 -- common/autotest_common.sh@10 -- # set +x 00:07:53.429 [2024-06-11 12:02:06.319934] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.429 12:02:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:53.429 12:02:06 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:53.429 12:02:06 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:07:53.429 12:02:06 -- common/autotest_common.sh@1358 -- # local bdev_info 00:07:53.429 12:02:06 -- common/autotest_common.sh@1359 -- # local bs 00:07:53.429 12:02:06 -- common/autotest_common.sh@1360 -- # local nb 00:07:53.429 12:02:06 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:53.429 12:02:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:53.429 12:02:06 -- common/autotest_common.sh@10 -- # set +x 00:07:53.429 12:02:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:53.429 12:02:06 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:07:53.429 { 00:07:53.429 "name": "Malloc1", 00:07:53.429 "aliases": [ 00:07:53.429 "96b7fe1e-aced-43f9-b8e8-17a5074dd258" 00:07:53.429 ], 00:07:53.429 "product_name": "Malloc disk", 00:07:53.429 "block_size": 512, 00:07:53.429 "num_blocks": 1048576, 00:07:53.429 "uuid": "96b7fe1e-aced-43f9-b8e8-17a5074dd258", 00:07:53.429 "assigned_rate_limits": { 00:07:53.429 "rw_ios_per_sec": 0, 00:07:53.429 "rw_mbytes_per_sec": 0, 00:07:53.429 "r_mbytes_per_sec": 0, 00:07:53.429 "w_mbytes_per_sec": 0 00:07:53.429 }, 00:07:53.429 "claimed": true, 00:07:53.429 "claim_type": "exclusive_write", 00:07:53.429 "zoned": false, 00:07:53.429 "supported_io_types": { 00:07:53.429 "read": true, 00:07:53.429 "write": true, 00:07:53.429 "unmap": true, 00:07:53.429 "write_zeroes": true, 00:07:53.429 "flush": true, 00:07:53.429 "reset": true, 00:07:53.429 "compare": false, 00:07:53.429 "compare_and_write": false, 00:07:53.429 "abort": true, 00:07:53.429 "nvme_admin": false, 00:07:53.429 "nvme_io": false 00:07:53.429 }, 00:07:53.429 "memory_domains": [ 00:07:53.429 { 00:07:53.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.429 "dma_device_type": 2 00:07:53.429 } 00:07:53.429 ], 00:07:53.429 "driver_specific": {} 00:07:53.429 } 00:07:53.429 ]' 00:07:53.429 12:02:06 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:07:53.429 12:02:06 -- common/autotest_common.sh@1362 -- # bs=512 00:07:53.429 12:02:06 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:07:53.429 12:02:06 -- common/autotest_common.sh@1363 -- # nb=1048576 00:07:53.429 12:02:06 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:07:53.429 12:02:06 -- common/autotest_common.sh@1367 -- # echo 512 00:07:53.429 12:02:06 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:53.429 12:02:06 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:55.339 12:02:07 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:55.339 12:02:07 -- common/autotest_common.sh@1177 -- # local i=0 00:07:55.339 12:02:07 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:07:55.339 12:02:07 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:07:55.339 12:02:07 -- common/autotest_common.sh@1184 -- # sleep 2 00:07:57.250 12:02:09 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:07:57.250 12:02:09 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:07:57.250 12:02:09 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:07:57.250 12:02:09 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:07:57.250 12:02:09 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:07:57.250 12:02:09 -- common/autotest_common.sh@1187 -- # return 0 00:07:57.250 12:02:09 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:57.250 12:02:09 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:57.250 12:02:09 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:57.250 12:02:09 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:57.250 12:02:09 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:57.250 12:02:09 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:57.250 12:02:09 -- setup/common.sh@80 -- # echo 536870912 00:07:57.250 12:02:09 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:57.250 12:02:09 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:57.250 12:02:09 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:57.250 12:02:09 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:57.511 12:02:10 -- target/filesystem.sh@69 -- # partprobe 00:07:58.081 12:02:10 -- target/filesystem.sh@70 -- # sleep 1 00:07:59.020 12:02:11 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:59.020 12:02:11 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:59.020 12:02:11 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:59.020 12:02:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:59.020 12:02:11 -- common/autotest_common.sh@10 -- # set +x 00:07:59.020 ************************************ 00:07:59.020 START TEST filesystem_in_capsule_ext4 00:07:59.020 ************************************ 00:07:59.020 12:02:11 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:59.020 12:02:11 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:59.020 12:02:11 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:59.020 12:02:11 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:59.020 12:02:11 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:07:59.020 12:02:11 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:59.020 12:02:11 -- common/autotest_common.sh@904 -- # local i=0 00:07:59.020 12:02:11 -- common/autotest_common.sh@905 -- # local force 00:07:59.020 12:02:11 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:07:59.020 12:02:11 -- common/autotest_common.sh@908 -- # force=-F 00:07:59.020 12:02:11 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:59.020 mke2fs 1.46.5 (30-Dec-2021) 00:07:59.020 Discarding device blocks: 0/522240 done 00:07:59.020 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:59.020 Filesystem UUID: 1b89aa60-5c28-4efb-9459-2cae0418df00 00:07:59.020 Superblock backups stored on blocks: 00:07:59.020 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:59.020 00:07:59.021 Allocating group tables: 0/64 done 00:07:59.021 Writing inode tables: 0/64 done 00:07:59.281 Creating journal (8192 blocks): done 00:07:59.281 Writing superblocks and filesystem accounting information: 0/64 done 00:07:59.281 00:07:59.281 
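On the initiator side the pattern repeated above is: connect to the subsystem over TCP, poll lsblk until a block device with the SPDK serial appears, then lay down a single GPT partition before each mkfs run. A minimal sketch of that host-side sequence (the harness's waitforserial uses a bounded retry loop rather than the open-ended poll shown here, and it also passes its generated --hostnqn/--hostid values to nvme connect):

# connect to the namespace exported by the test subsystem
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

# wait for the namespace to show up, then grab its device name by serial
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')

# one GPT partition across the whole device, then refresh the kernel's view
parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe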
12:02:12 -- common/autotest_common.sh@921 -- # return 0 00:07:59.281 12:02:12 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:59.541 12:02:12 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:59.541 12:02:12 -- target/filesystem.sh@25 -- # sync 00:07:59.541 12:02:12 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:59.541 12:02:12 -- target/filesystem.sh@27 -- # sync 00:07:59.541 12:02:12 -- target/filesystem.sh@29 -- # i=0 00:07:59.541 12:02:12 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:59.541 12:02:12 -- target/filesystem.sh@37 -- # kill -0 1299959 00:07:59.541 12:02:12 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:59.541 12:02:12 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:59.541 12:02:12 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:59.541 12:02:12 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:59.541 00:07:59.541 real 0m0.587s 00:07:59.541 user 0m0.024s 00:07:59.541 sys 0m0.046s 00:07:59.541 12:02:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.541 12:02:12 -- common/autotest_common.sh@10 -- # set +x 00:07:59.541 ************************************ 00:07:59.541 END TEST filesystem_in_capsule_ext4 00:07:59.541 ************************************ 00:07:59.541 12:02:12 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:59.541 12:02:12 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:59.541 12:02:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:59.541 12:02:12 -- common/autotest_common.sh@10 -- # set +x 00:07:59.541 ************************************ 00:07:59.541 START TEST filesystem_in_capsule_btrfs 00:07:59.541 ************************************ 00:07:59.541 12:02:12 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:59.541 12:02:12 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:59.541 12:02:12 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:59.542 12:02:12 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:59.542 12:02:12 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:07:59.802 12:02:12 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:59.802 12:02:12 -- common/autotest_common.sh@904 -- # local i=0 00:07:59.802 12:02:12 -- common/autotest_common.sh@905 -- # local force 00:07:59.802 12:02:12 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:07:59.802 12:02:12 -- common/autotest_common.sh@910 -- # force=-f 00:07:59.802 12:02:12 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:00.064 btrfs-progs v6.6.2 00:08:00.064 See https://btrfs.readthedocs.io for more information. 00:08:00.064 00:08:00.064 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:00.064 NOTE: several default settings have changed in version 5.15, please make sure 00:08:00.064 this does not affect your deployments: 00:08:00.064 - DUP for metadata (-m dup) 00:08:00.064 - enabled no-holes (-O no-holes) 00:08:00.064 - enabled free-space-tree (-R free-space-tree) 00:08:00.064 00:08:00.064 Label: (null) 00:08:00.064 UUID: 8b4f9e29-2d57-47e5-9427-e32dfa0e82db 00:08:00.064 Node size: 16384 00:08:00.064 Sector size: 4096 00:08:00.064 Filesystem size: 510.00MiB 00:08:00.064 Block group profiles: 00:08:00.064 Data: single 8.00MiB 00:08:00.064 Metadata: DUP 32.00MiB 00:08:00.064 System: DUP 8.00MiB 00:08:00.064 SSD detected: yes 00:08:00.064 Zoned device: no 00:08:00.064 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:00.064 Runtime features: free-space-tree 00:08:00.064 Checksum: crc32c 00:08:00.064 Number of devices: 1 00:08:00.064 Devices: 00:08:00.064 ID SIZE PATH 00:08:00.064 1 510.00MiB /dev/nvme0n1p1 00:08:00.064 00:08:00.064 12:02:13 -- common/autotest_common.sh@921 -- # return 0 00:08:00.064 12:02:13 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:00.635 12:02:13 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:00.635 12:02:13 -- target/filesystem.sh@25 -- # sync 00:08:00.635 12:02:13 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:00.635 12:02:13 -- target/filesystem.sh@27 -- # sync 00:08:00.635 12:02:13 -- target/filesystem.sh@29 -- # i=0 00:08:00.635 12:02:13 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:00.635 12:02:13 -- target/filesystem.sh@37 -- # kill -0 1299959 00:08:00.635 12:02:13 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:00.636 12:02:13 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:00.636 12:02:13 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:00.636 12:02:13 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:00.636 00:08:00.636 real 0m1.047s 00:08:00.636 user 0m0.025s 00:08:00.636 sys 0m0.067s 00:08:00.636 12:02:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.636 12:02:13 -- common/autotest_common.sh@10 -- # set +x 00:08:00.636 ************************************ 00:08:00.636 END TEST filesystem_in_capsule_btrfs 00:08:00.636 ************************************ 00:08:00.636 12:02:13 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:00.636 12:02:13 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:00.636 12:02:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:00.636 12:02:13 -- common/autotest_common.sh@10 -- # set +x 00:08:00.636 ************************************ 00:08:00.636 START TEST filesystem_in_capsule_xfs 00:08:00.636 ************************************ 00:08:00.636 12:02:13 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:00.636 12:02:13 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:00.636 12:02:13 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:00.636 12:02:13 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:00.636 12:02:13 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:00.636 12:02:13 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:00.636 12:02:13 -- common/autotest_common.sh@904 -- # local i=0 00:08:00.636 12:02:13 -- common/autotest_common.sh@905 -- # local force 00:08:00.636 12:02:13 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:00.894 12:02:13 -- common/autotest_common.sh@910 -- # force=-f 
00:08:00.894 12:02:13 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:00.894 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:00.894 = sectsz=512 attr=2, projid32bit=1 00:08:00.894 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:00.894 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:00.894 data = bsize=4096 blocks=130560, imaxpct=25 00:08:00.894 = sunit=0 swidth=0 blks 00:08:00.894 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:00.894 log =internal log bsize=4096 blocks=16384, version=2 00:08:00.895 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:00.895 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:01.836 Discarding blocks...Done. 00:08:01.836 12:02:14 -- common/autotest_common.sh@921 -- # return 0 00:08:01.836 12:02:14 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:03.752 12:02:16 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:03.752 12:02:16 -- target/filesystem.sh@25 -- # sync 00:08:03.752 12:02:16 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:03.752 12:02:16 -- target/filesystem.sh@27 -- # sync 00:08:03.752 12:02:16 -- target/filesystem.sh@29 -- # i=0 00:08:03.752 12:02:16 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:03.752 12:02:16 -- target/filesystem.sh@37 -- # kill -0 1299959 00:08:03.752 12:02:16 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:03.752 12:02:16 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:03.752 12:02:16 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:03.752 12:02:16 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:03.752 00:08:03.752 real 0m2.834s 00:08:03.752 user 0m0.023s 00:08:03.752 sys 0m0.055s 00:08:03.752 12:02:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.752 12:02:16 -- common/autotest_common.sh@10 -- # set +x 00:08:03.752 ************************************ 00:08:03.752 END TEST filesystem_in_capsule_xfs 00:08:03.752 ************************************ 00:08:03.752 12:02:16 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:03.752 12:02:16 -- target/filesystem.sh@93 -- # sync 00:08:03.752 12:02:16 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:03.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:03.752 12:02:16 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:03.752 12:02:16 -- common/autotest_common.sh@1198 -- # local i=0 00:08:03.752 12:02:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:03.752 12:02:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:03.752 12:02:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:03.752 12:02:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:03.752 12:02:16 -- common/autotest_common.sh@1210 -- # return 0 00:08:03.752 12:02:16 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:03.752 12:02:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:03.752 12:02:16 -- common/autotest_common.sh@10 -- # set +x 00:08:03.752 12:02:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:03.752 12:02:16 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:03.752 12:02:16 -- target/filesystem.sh@101 -- # killprocess 1299959 00:08:03.752 12:02:16 -- common/autotest_common.sh@926 -- # '[' -z 1299959 ']' 00:08:03.752 12:02:16 -- common/autotest_common.sh@930 -- # kill -0 1299959 
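Each filesystem suite tears down the same way as shown above: the initiator disconnects, the subsystem is removed over RPC, and the target is stopped only after killprocess confirms the pid still belongs to the expected reactor process. Condensed, with $nvmfpid standing for the pid recorded at startup (1299959 in this run):

# initiator: drop the NVMe/TCP connection
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
# target: remove the subsystem, then stop nvmf_tgt and reap it
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid"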
00:08:03.752 12:02:16 -- common/autotest_common.sh@931 -- # uname 00:08:03.752 12:02:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:03.752 12:02:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1299959 00:08:04.014 12:02:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:04.014 12:02:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:04.014 12:02:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1299959' 00:08:04.014 killing process with pid 1299959 00:08:04.014 12:02:16 -- common/autotest_common.sh@945 -- # kill 1299959 00:08:04.014 12:02:16 -- common/autotest_common.sh@950 -- # wait 1299959 00:08:04.014 12:02:17 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:04.014 00:08:04.014 real 0m11.718s 00:08:04.014 user 0m46.230s 00:08:04.014 sys 0m1.010s 00:08:04.014 12:02:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.014 12:02:17 -- common/autotest_common.sh@10 -- # set +x 00:08:04.014 ************************************ 00:08:04.014 END TEST nvmf_filesystem_in_capsule 00:08:04.014 ************************************ 00:08:04.274 12:02:17 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:04.274 12:02:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:04.274 12:02:17 -- nvmf/common.sh@116 -- # sync 00:08:04.274 12:02:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:04.274 12:02:17 -- nvmf/common.sh@119 -- # set +e 00:08:04.274 12:02:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:04.274 12:02:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:04.274 rmmod nvme_tcp 00:08:04.274 rmmod nvme_fabrics 00:08:04.274 rmmod nvme_keyring 00:08:04.274 12:02:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:04.274 12:02:17 -- nvmf/common.sh@123 -- # set -e 00:08:04.274 12:02:17 -- nvmf/common.sh@124 -- # return 0 00:08:04.274 12:02:17 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:04.274 12:02:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:04.274 12:02:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:04.274 12:02:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:04.274 12:02:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:04.274 12:02:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:04.274 12:02:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.274 12:02:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:04.274 12:02:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.187 12:02:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:06.187 00:08:06.187 real 0m34.174s 00:08:06.187 user 1m38.673s 00:08:06.187 sys 0m7.476s 00:08:06.187 12:02:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.187 12:02:19 -- common/autotest_common.sh@10 -- # set +x 00:08:06.187 ************************************ 00:08:06.187 END TEST nvmf_filesystem 00:08:06.187 ************************************ 00:08:06.449 12:02:19 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:06.449 12:02:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:06.449 12:02:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:06.449 12:02:19 -- common/autotest_common.sh@10 -- # set +x 00:08:06.449 ************************************ 00:08:06.449 START TEST nvmf_discovery 00:08:06.449 ************************************ 00:08:06.449 
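Before the discovery suite that starts above, nvmftestfini returned the node to a clean state: the NVMe/TCP initiator modules were unloaded, the spdk namespace removed, and the leftover initiator address flushed. Roughly:

modprobe -r nvme-tcp
modprobe -r nvme-fabrics
# remove_spdk_ns is a helper whose body is not shown in this log; deleting the
# namespace created earlier is the assumed effect
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1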
12:02:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:06.449 * Looking for test storage... 00:08:06.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:06.449 12:02:19 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:06.449 12:02:19 -- nvmf/common.sh@7 -- # uname -s 00:08:06.449 12:02:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.449 12:02:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.449 12:02:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.449 12:02:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.449 12:02:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.449 12:02:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.449 12:02:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.449 12:02:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.449 12:02:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.449 12:02:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.449 12:02:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:06.449 12:02:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:06.449 12:02:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.449 12:02:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.449 12:02:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:06.449 12:02:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:06.449 12:02:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.449 12:02:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.449 12:02:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.449 12:02:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.449 12:02:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.449 12:02:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.449 12:02:19 -- paths/export.sh@5 -- # export PATH 00:08:06.449 12:02:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.449 12:02:19 -- nvmf/common.sh@46 -- # : 0 00:08:06.449 12:02:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:06.449 12:02:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:06.449 12:02:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:06.449 12:02:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.449 12:02:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.449 12:02:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:06.449 12:02:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:06.449 12:02:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:06.449 12:02:19 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:06.449 12:02:19 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:06.449 12:02:19 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:06.449 12:02:19 -- target/discovery.sh@15 -- # hash nvme 00:08:06.449 12:02:19 -- target/discovery.sh@20 -- # nvmftestinit 00:08:06.449 12:02:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:06.449 12:02:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.449 12:02:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:06.449 12:02:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:06.449 12:02:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:06.449 12:02:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.449 12:02:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:06.449 12:02:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.449 12:02:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:06.449 12:02:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:06.449 12:02:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:06.449 12:02:19 -- common/autotest_common.sh@10 -- # set +x 00:08:13.036 12:02:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:13.036 12:02:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:13.036 12:02:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:13.036 12:02:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:13.036 12:02:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:13.036 12:02:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:13.036 12:02:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:13.036 12:02:26 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:13.036 12:02:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:13.036 12:02:26 -- nvmf/common.sh@295 -- # e810=() 00:08:13.036 12:02:26 -- nvmf/common.sh@295 -- # local -ga e810 00:08:13.036 12:02:26 -- nvmf/common.sh@296 -- # x722=() 00:08:13.036 12:02:26 -- nvmf/common.sh@296 -- # local -ga x722 00:08:13.036 12:02:26 -- nvmf/common.sh@297 -- # mlx=() 00:08:13.036 12:02:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:13.036 12:02:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:13.036 12:02:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:13.036 12:02:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:13.036 12:02:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:13.036 12:02:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:13.036 12:02:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:13.036 12:02:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:13.036 12:02:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:13.036 12:02:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:13.036 12:02:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:13.036 12:02:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:13.036 12:02:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:13.036 12:02:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:13.036 12:02:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:13.036 12:02:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:13.036 12:02:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:13.036 12:02:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:13.036 12:02:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:13.036 12:02:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:13.036 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:13.036 12:02:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:13.036 12:02:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:13.036 12:02:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.036 12:02:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.036 12:02:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:13.036 12:02:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:13.036 12:02:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:13.036 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:13.036 12:02:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:13.036 12:02:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:13.036 12:02:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.036 12:02:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.036 12:02:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:13.036 12:02:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:13.036 12:02:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:13.036 12:02:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:13.036 12:02:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:13.036 12:02:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.036 12:02:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:13.036 12:02:26 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.036 12:02:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:13.036 Found net devices under 0000:31:00.0: cvl_0_0 00:08:13.036 12:02:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.036 12:02:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:13.036 12:02:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.036 12:02:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:13.036 12:02:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.036 12:02:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:13.036 Found net devices under 0000:31:00.1: cvl_0_1 00:08:13.036 12:02:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.036 12:02:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:13.036 12:02:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:13.036 12:02:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:13.036 12:02:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:13.036 12:02:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:13.036 12:02:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:13.036 12:02:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:13.036 12:02:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:13.036 12:02:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:13.036 12:02:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:13.036 12:02:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:13.036 12:02:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:13.036 12:02:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:13.036 12:02:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:13.036 12:02:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:13.036 12:02:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:13.036 12:02:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:13.036 12:02:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:13.297 12:02:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:13.297 12:02:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:13.297 12:02:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:13.297 12:02:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:13.297 12:02:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:13.297 12:02:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:13.297 12:02:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:13.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:13.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.518 ms 00:08:13.297 00:08:13.297 --- 10.0.0.2 ping statistics --- 00:08:13.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.297 rtt min/avg/max/mdev = 0.518/0.518/0.518/0.000 ms 00:08:13.297 12:02:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:13.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:13.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:08:13.297 00:08:13.297 --- 10.0.0.1 ping statistics --- 00:08:13.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.297 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:08:13.297 12:02:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:13.297 12:02:26 -- nvmf/common.sh@410 -- # return 0 00:08:13.297 12:02:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:13.297 12:02:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:13.297 12:02:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:13.297 12:02:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:13.297 12:02:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:13.297 12:02:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:13.297 12:02:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:13.558 12:02:26 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:13.558 12:02:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:13.558 12:02:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:13.558 12:02:26 -- common/autotest_common.sh@10 -- # set +x 00:08:13.558 12:02:26 -- nvmf/common.sh@469 -- # nvmfpid=1306944 00:08:13.558 12:02:26 -- nvmf/common.sh@470 -- # waitforlisten 1306944 00:08:13.558 12:02:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:13.558 12:02:26 -- common/autotest_common.sh@819 -- # '[' -z 1306944 ']' 00:08:13.558 12:02:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.558 12:02:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:13.558 12:02:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.558 12:02:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:13.558 12:02:26 -- common/autotest_common.sh@10 -- # set +x 00:08:13.558 [2024-06-11 12:02:26.391217] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:13.558 [2024-06-11 12:02:26.391264] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.558 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.558 [2024-06-11 12:02:26.457683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:13.558 [2024-06-11 12:02:26.487865] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:13.558 [2024-06-11 12:02:26.487998] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.558 [2024-06-11 12:02:26.488009] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:13.558 [2024-06-11 12:02:26.488027] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
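[Editor's note] The nvmf_tcp_init and nvmfappstart traces above reduce to a small, repeatable recipe: move one port of the NIC into a private network namespace, address both ends, open TCP/4420, verify reachability, and start nvmf_tgt inside that namespace. This is a condensed sketch of the commands already traced in the log, assuming the cvl_0_0/cvl_0_1 interface names detected above and the SPDK build tree used by this job:

  # Target port lives in its own namespace; the other port stays in the root
  # namespace and acts as the initiator.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept TCP/4420 on the root-namespace port
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator
  modprobe nvme-tcp
  # Start the target inside the namespace; -m 0xF pins it to 4 cores and
  # -e 0xFFFF enables all tracepoint groups, matching the notices above.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &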
00:08:13.558 [2024-06-11 12:02:26.488068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.558 [2024-06-11 12:02:26.488137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.558 [2024-06-11 12:02:26.488400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.558 [2024-06-11 12:02:26.488402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.128 12:02:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:14.128 12:02:27 -- common/autotest_common.sh@852 -- # return 0 00:08:14.128 12:02:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:14.128 12:02:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:14.128 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.388 12:02:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.388 12:02:27 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:14.388 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.388 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.388 [2024-06-11 12:02:27.202349] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.388 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.388 12:02:27 -- target/discovery.sh@26 -- # seq 1 4 00:08:14.388 12:02:27 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:14.388 12:02:27 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:14.388 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.388 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.388 Null1 00:08:14.388 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.388 12:02:27 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:14.388 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.388 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.388 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.388 12:02:27 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:14.388 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.388 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.388 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.388 12:02:27 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.388 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.388 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.388 [2024-06-11 12:02:27.255846] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.388 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.388 12:02:27 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:14.388 12:02:27 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:14.389 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.389 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.389 Null2 00:08:14.389 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.389 12:02:27 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:14.389 12:02:27 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.389 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.389 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.389 12:02:27 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:14.389 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.389 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.389 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.389 12:02:27 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:14.389 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.389 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.389 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.389 12:02:27 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:14.389 12:02:27 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:14.389 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.389 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.389 Null3 00:08:14.389 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.389 12:02:27 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:14.389 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.389 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.389 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.389 12:02:27 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:14.389 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.389 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.389 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.389 12:02:27 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:14.389 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.389 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.389 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.389 12:02:27 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:14.389 12:02:27 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:14.389 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.389 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.389 Null4 00:08:14.389 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.389 12:02:27 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:14.389 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.389 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.389 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.389 12:02:27 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:14.389 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.389 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.389 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.389 12:02:27 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:14.389 
12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.389 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.389 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.389 12:02:27 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:14.389 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.389 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.389 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.389 12:02:27 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:14.389 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.389 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.389 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.389 12:02:27 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:08:14.651 00:08:14.651 Discovery Log Number of Records 6, Generation counter 6 00:08:14.651 =====Discovery Log Entry 0====== 00:08:14.651 trtype: tcp 00:08:14.652 adrfam: ipv4 00:08:14.652 subtype: current discovery subsystem 00:08:14.652 treq: not required 00:08:14.652 portid: 0 00:08:14.652 trsvcid: 4420 00:08:14.652 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:14.652 traddr: 10.0.0.2 00:08:14.652 eflags: explicit discovery connections, duplicate discovery information 00:08:14.652 sectype: none 00:08:14.652 =====Discovery Log Entry 1====== 00:08:14.652 trtype: tcp 00:08:14.652 adrfam: ipv4 00:08:14.652 subtype: nvme subsystem 00:08:14.652 treq: not required 00:08:14.652 portid: 0 00:08:14.652 trsvcid: 4420 00:08:14.652 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:14.652 traddr: 10.0.0.2 00:08:14.652 eflags: none 00:08:14.652 sectype: none 00:08:14.652 =====Discovery Log Entry 2====== 00:08:14.652 trtype: tcp 00:08:14.652 adrfam: ipv4 00:08:14.652 subtype: nvme subsystem 00:08:14.652 treq: not required 00:08:14.652 portid: 0 00:08:14.652 trsvcid: 4420 00:08:14.652 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:14.652 traddr: 10.0.0.2 00:08:14.652 eflags: none 00:08:14.652 sectype: none 00:08:14.652 =====Discovery Log Entry 3====== 00:08:14.652 trtype: tcp 00:08:14.652 adrfam: ipv4 00:08:14.652 subtype: nvme subsystem 00:08:14.652 treq: not required 00:08:14.652 portid: 0 00:08:14.652 trsvcid: 4420 00:08:14.652 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:14.652 traddr: 10.0.0.2 00:08:14.652 eflags: none 00:08:14.652 sectype: none 00:08:14.652 =====Discovery Log Entry 4====== 00:08:14.652 trtype: tcp 00:08:14.652 adrfam: ipv4 00:08:14.652 subtype: nvme subsystem 00:08:14.652 treq: not required 00:08:14.652 portid: 0 00:08:14.652 trsvcid: 4420 00:08:14.652 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:14.652 traddr: 10.0.0.2 00:08:14.652 eflags: none 00:08:14.652 sectype: none 00:08:14.652 =====Discovery Log Entry 5====== 00:08:14.652 trtype: tcp 00:08:14.652 adrfam: ipv4 00:08:14.652 subtype: discovery subsystem referral 00:08:14.652 treq: not required 00:08:14.652 portid: 0 00:08:14.652 trsvcid: 4430 00:08:14.652 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:14.652 traddr: 10.0.0.2 00:08:14.652 eflags: none 00:08:14.652 sectype: none 00:08:14.652 12:02:27 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:14.652 Perform nvmf subsystem discovery via RPC 00:08:14.652 12:02:27 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:14.652 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.652 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.652 [2024-06-11 12:02:27.572765] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:14.652 [ 00:08:14.652 { 00:08:14.652 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:14.652 "subtype": "Discovery", 00:08:14.652 "listen_addresses": [ 00:08:14.652 { 00:08:14.652 "transport": "TCP", 00:08:14.652 "trtype": "TCP", 00:08:14.652 "adrfam": "IPv4", 00:08:14.652 "traddr": "10.0.0.2", 00:08:14.652 "trsvcid": "4420" 00:08:14.652 } 00:08:14.652 ], 00:08:14.652 "allow_any_host": true, 00:08:14.652 "hosts": [] 00:08:14.652 }, 00:08:14.652 { 00:08:14.652 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:14.652 "subtype": "NVMe", 00:08:14.652 "listen_addresses": [ 00:08:14.652 { 00:08:14.652 "transport": "TCP", 00:08:14.652 "trtype": "TCP", 00:08:14.652 "adrfam": "IPv4", 00:08:14.652 "traddr": "10.0.0.2", 00:08:14.652 "trsvcid": "4420" 00:08:14.652 } 00:08:14.652 ], 00:08:14.652 "allow_any_host": true, 00:08:14.652 "hosts": [], 00:08:14.652 "serial_number": "SPDK00000000000001", 00:08:14.652 "model_number": "SPDK bdev Controller", 00:08:14.652 "max_namespaces": 32, 00:08:14.652 "min_cntlid": 1, 00:08:14.652 "max_cntlid": 65519, 00:08:14.652 "namespaces": [ 00:08:14.652 { 00:08:14.652 "nsid": 1, 00:08:14.652 "bdev_name": "Null1", 00:08:14.652 "name": "Null1", 00:08:14.652 "nguid": "31185A82775F4B8E9A26DB22061735B2", 00:08:14.652 "uuid": "31185a82-775f-4b8e-9a26-db22061735b2" 00:08:14.652 } 00:08:14.652 ] 00:08:14.652 }, 00:08:14.652 { 00:08:14.652 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:14.652 "subtype": "NVMe", 00:08:14.652 "listen_addresses": [ 00:08:14.652 { 00:08:14.652 "transport": "TCP", 00:08:14.652 "trtype": "TCP", 00:08:14.652 "adrfam": "IPv4", 00:08:14.652 "traddr": "10.0.0.2", 00:08:14.652 "trsvcid": "4420" 00:08:14.652 } 00:08:14.652 ], 00:08:14.652 "allow_any_host": true, 00:08:14.652 "hosts": [], 00:08:14.652 "serial_number": "SPDK00000000000002", 00:08:14.652 "model_number": "SPDK bdev Controller", 00:08:14.652 "max_namespaces": 32, 00:08:14.652 "min_cntlid": 1, 00:08:14.652 "max_cntlid": 65519, 00:08:14.652 "namespaces": [ 00:08:14.652 { 00:08:14.652 "nsid": 1, 00:08:14.652 "bdev_name": "Null2", 00:08:14.652 "name": "Null2", 00:08:14.652 "nguid": "4CE8B7A2410E4331AFFF65E1C833F670", 00:08:14.652 "uuid": "4ce8b7a2-410e-4331-afff-65e1c833f670" 00:08:14.652 } 00:08:14.652 ] 00:08:14.652 }, 00:08:14.652 { 00:08:14.652 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:14.652 "subtype": "NVMe", 00:08:14.652 "listen_addresses": [ 00:08:14.652 { 00:08:14.652 "transport": "TCP", 00:08:14.652 "trtype": "TCP", 00:08:14.652 "adrfam": "IPv4", 00:08:14.652 "traddr": "10.0.0.2", 00:08:14.652 "trsvcid": "4420" 00:08:14.652 } 00:08:14.652 ], 00:08:14.652 "allow_any_host": true, 00:08:14.652 "hosts": [], 00:08:14.652 "serial_number": "SPDK00000000000003", 00:08:14.652 "model_number": "SPDK bdev Controller", 00:08:14.652 "max_namespaces": 32, 00:08:14.652 "min_cntlid": 1, 00:08:14.652 "max_cntlid": 65519, 00:08:14.652 "namespaces": [ 00:08:14.652 { 00:08:14.652 "nsid": 1, 00:08:14.652 "bdev_name": "Null3", 00:08:14.652 "name": "Null3", 00:08:14.652 "nguid": "BD14C15891D449ECA00DC8E8090A086C", 00:08:14.652 "uuid": "bd14c158-91d4-49ec-a00d-c8e8090a086c" 00:08:14.652 } 00:08:14.652 ] 
00:08:14.652 }, 00:08:14.652 { 00:08:14.652 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:14.652 "subtype": "NVMe", 00:08:14.652 "listen_addresses": [ 00:08:14.652 { 00:08:14.652 "transport": "TCP", 00:08:14.652 "trtype": "TCP", 00:08:14.652 "adrfam": "IPv4", 00:08:14.652 "traddr": "10.0.0.2", 00:08:14.652 "trsvcid": "4420" 00:08:14.652 } 00:08:14.652 ], 00:08:14.652 "allow_any_host": true, 00:08:14.652 "hosts": [], 00:08:14.652 "serial_number": "SPDK00000000000004", 00:08:14.652 "model_number": "SPDK bdev Controller", 00:08:14.652 "max_namespaces": 32, 00:08:14.652 "min_cntlid": 1, 00:08:14.652 "max_cntlid": 65519, 00:08:14.652 "namespaces": [ 00:08:14.652 { 00:08:14.652 "nsid": 1, 00:08:14.652 "bdev_name": "Null4", 00:08:14.652 "name": "Null4", 00:08:14.652 "nguid": "15A451DDDC25426FABBEA5ED91A37EFA", 00:08:14.652 "uuid": "15a451dd-dc25-426f-abbe-a5ed91a37efa" 00:08:14.652 } 00:08:14.652 ] 00:08:14.652 } 00:08:14.652 ] 00:08:14.652 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.652 12:02:27 -- target/discovery.sh@42 -- # seq 1 4 00:08:14.652 12:02:27 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:14.652 12:02:27 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:14.652 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.652 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.652 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.652 12:02:27 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:14.652 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.652 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.652 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.652 12:02:27 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:14.652 12:02:27 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:14.652 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.652 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.652 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.652 12:02:27 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:14.652 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.652 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.652 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.652 12:02:27 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:14.652 12:02:27 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:14.652 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.652 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.652 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.652 12:02:27 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:14.652 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.652 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.652 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.652 12:02:27 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:14.652 12:02:27 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:14.652 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.652 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.652 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
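[Editor's note] Everything the discovery test configured, inspected, and is now tearing down goes through SPDK's JSON-RPC socket. Collapsed out of the rpc_cmd wrapper, the sequence for one of the four subsystems looks roughly like the following; a sketch assuming rpc.py from the SPDK tree and the default /var/tmp/spdk.sock socket:

  # Provisioning (performed earlier in this test, once per Null1..Null4):
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_null_create Null1 102400 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  # Inspection (the JSON dump above) and teardown (the loop traced below):
  scripts/rpc.py nvmf_get_subsystems
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_null_delete Null1
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430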
00:08:14.652 12:02:27 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:14.652 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.652 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.652 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.653 12:02:27 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:14.653 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.653 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.914 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.915 12:02:27 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:14.915 12:02:27 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:14.915 12:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.915 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.915 12:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.915 12:02:27 -- target/discovery.sh@49 -- # check_bdevs= 00:08:14.915 12:02:27 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:14.915 12:02:27 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:14.915 12:02:27 -- target/discovery.sh@57 -- # nvmftestfini 00:08:14.915 12:02:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:14.915 12:02:27 -- nvmf/common.sh@116 -- # sync 00:08:14.915 12:02:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:14.915 12:02:27 -- nvmf/common.sh@119 -- # set +e 00:08:14.915 12:02:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:14.915 12:02:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:14.915 rmmod nvme_tcp 00:08:14.915 rmmod nvme_fabrics 00:08:14.915 rmmod nvme_keyring 00:08:14.915 12:02:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:14.915 12:02:27 -- nvmf/common.sh@123 -- # set -e 00:08:14.915 12:02:27 -- nvmf/common.sh@124 -- # return 0 00:08:14.915 12:02:27 -- nvmf/common.sh@477 -- # '[' -n 1306944 ']' 00:08:14.915 12:02:27 -- nvmf/common.sh@478 -- # killprocess 1306944 00:08:14.915 12:02:27 -- common/autotest_common.sh@926 -- # '[' -z 1306944 ']' 00:08:14.915 12:02:27 -- common/autotest_common.sh@930 -- # kill -0 1306944 00:08:14.915 12:02:27 -- common/autotest_common.sh@931 -- # uname 00:08:14.915 12:02:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:14.915 12:02:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1306944 00:08:14.915 12:02:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:14.915 12:02:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:14.915 12:02:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1306944' 00:08:14.915 killing process with pid 1306944 00:08:14.915 12:02:27 -- common/autotest_common.sh@945 -- # kill 1306944 00:08:14.915 [2024-06-11 12:02:27.868220] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:14.915 12:02:27 -- common/autotest_common.sh@950 -- # wait 1306944 00:08:15.176 12:02:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:15.176 12:02:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:15.176 12:02:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:15.176 12:02:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:15.176 12:02:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:15.176 12:02:27 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.176 12:02:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.176 12:02:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.090 12:02:30 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:17.090 00:08:17.090 real 0m10.798s 00:08:17.090 user 0m8.162s 00:08:17.090 sys 0m5.405s 00:08:17.090 12:02:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.090 12:02:30 -- common/autotest_common.sh@10 -- # set +x 00:08:17.090 ************************************ 00:08:17.090 END TEST nvmf_discovery 00:08:17.090 ************************************ 00:08:17.090 12:02:30 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:17.090 12:02:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:17.090 12:02:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:17.090 12:02:30 -- common/autotest_common.sh@10 -- # set +x 00:08:17.090 ************************************ 00:08:17.090 START TEST nvmf_referrals 00:08:17.090 ************************************ 00:08:17.090 12:02:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:17.352 * Looking for test storage... 00:08:17.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.352 12:02:30 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.352 12:02:30 -- nvmf/common.sh@7 -- # uname -s 00:08:17.352 12:02:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.352 12:02:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.352 12:02:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.352 12:02:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.352 12:02:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.352 12:02:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.352 12:02:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.352 12:02:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.352 12:02:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.352 12:02:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.352 12:02:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:17.352 12:02:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:17.352 12:02:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.352 12:02:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.352 12:02:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.352 12:02:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.352 12:02:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.352 12:02:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.352 12:02:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.352 12:02:30 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.352 12:02:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.352 12:02:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.352 12:02:30 -- paths/export.sh@5 -- # export PATH 00:08:17.352 12:02:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.352 12:02:30 -- nvmf/common.sh@46 -- # : 0 00:08:17.352 12:02:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:17.352 12:02:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:17.352 12:02:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:17.352 12:02:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.352 12:02:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.352 12:02:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:17.352 12:02:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:17.352 12:02:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:17.352 12:02:30 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:17.352 12:02:30 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:17.352 12:02:30 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:17.352 12:02:30 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:17.352 12:02:30 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:17.352 12:02:30 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:17.352 12:02:30 -- target/referrals.sh@37 -- # nvmftestinit 00:08:17.353 12:02:30 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:08:17.353 12:02:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.353 12:02:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:17.353 12:02:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:17.353 12:02:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:17.353 12:02:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.353 12:02:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:17.353 12:02:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.353 12:02:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:17.353 12:02:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:17.353 12:02:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:17.353 12:02:30 -- common/autotest_common.sh@10 -- # set +x 00:08:25.591 12:02:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:25.591 12:02:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:25.591 12:02:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:25.591 12:02:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:25.591 12:02:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:25.591 12:02:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:25.591 12:02:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:25.591 12:02:37 -- nvmf/common.sh@294 -- # net_devs=() 00:08:25.591 12:02:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:25.591 12:02:37 -- nvmf/common.sh@295 -- # e810=() 00:08:25.591 12:02:37 -- nvmf/common.sh@295 -- # local -ga e810 00:08:25.591 12:02:37 -- nvmf/common.sh@296 -- # x722=() 00:08:25.591 12:02:37 -- nvmf/common.sh@296 -- # local -ga x722 00:08:25.591 12:02:37 -- nvmf/common.sh@297 -- # mlx=() 00:08:25.591 12:02:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:25.591 12:02:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.591 12:02:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.591 12:02:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.591 12:02:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.591 12:02:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.591 12:02:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.591 12:02:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.591 12:02:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.591 12:02:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.591 12:02:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.591 12:02:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.591 12:02:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:25.591 12:02:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:25.591 12:02:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:25.591 12:02:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:25.591 12:02:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:25.591 12:02:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:25.591 12:02:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:25.591 12:02:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:25.591 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:25.591 12:02:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:25.591 12:02:37 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:25.591 12:02:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.591 12:02:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.591 12:02:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:25.591 12:02:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:25.591 12:02:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:25.591 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:25.591 12:02:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:25.591 12:02:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:25.591 12:02:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.591 12:02:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.591 12:02:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:25.591 12:02:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:25.591 12:02:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:25.591 12:02:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:25.591 12:02:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:25.591 12:02:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.592 12:02:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:25.592 12:02:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.592 12:02:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:25.592 Found net devices under 0000:31:00.0: cvl_0_0 00:08:25.592 12:02:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.592 12:02:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:25.592 12:02:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.592 12:02:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:25.592 12:02:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.592 12:02:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:25.592 Found net devices under 0000:31:00.1: cvl_0_1 00:08:25.592 12:02:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.592 12:02:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:25.592 12:02:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:25.592 12:02:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:25.592 12:02:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:25.592 12:02:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:25.592 12:02:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.592 12:02:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.592 12:02:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.592 12:02:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:25.592 12:02:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.592 12:02:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.592 12:02:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:25.592 12:02:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.592 12:02:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.592 12:02:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:25.592 12:02:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:25.592 12:02:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.592 12:02:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:08:25.592 12:02:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.592 12:02:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.592 12:02:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:25.592 12:02:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.592 12:02:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.592 12:02:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.592 12:02:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:25.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:08:25.592 00:08:25.592 --- 10.0.0.2 ping statistics --- 00:08:25.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.592 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:08:25.592 12:02:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:25.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:08:25.592 00:08:25.592 --- 10.0.0.1 ping statistics --- 00:08:25.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.592 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:08:25.592 12:02:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.592 12:02:37 -- nvmf/common.sh@410 -- # return 0 00:08:25.592 12:02:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:25.592 12:02:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.592 12:02:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:25.592 12:02:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:25.592 12:02:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.592 12:02:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:25.592 12:02:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:25.592 12:02:37 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:25.592 12:02:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:25.592 12:02:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:25.592 12:02:37 -- common/autotest_common.sh@10 -- # set +x 00:08:25.592 12:02:37 -- nvmf/common.sh@469 -- # nvmfpid=1311409 00:08:25.592 12:02:37 -- nvmf/common.sh@470 -- # waitforlisten 1311409 00:08:25.592 12:02:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:25.592 12:02:37 -- common/autotest_common.sh@819 -- # '[' -z 1311409 ']' 00:08:25.592 12:02:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.592 12:02:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:25.592 12:02:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.592 12:02:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:25.592 12:02:37 -- common/autotest_common.sh@10 -- # set +x 00:08:25.592 [2024-06-11 12:02:37.636913] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:08:25.592 [2024-06-11 12:02:37.636983] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.592 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.592 [2024-06-11 12:02:37.708523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:25.592 [2024-06-11 12:02:37.745788] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:25.592 [2024-06-11 12:02:37.745937] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.592 [2024-06-11 12:02:37.745947] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.592 [2024-06-11 12:02:37.745955] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.592 [2024-06-11 12:02:37.746111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.592 [2024-06-11 12:02:37.746225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.592 [2024-06-11 12:02:37.746388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.592 [2024-06-11 12:02:37.746389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:25.592 12:02:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:25.592 12:02:38 -- common/autotest_common.sh@852 -- # return 0 00:08:25.592 12:02:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:25.592 12:02:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:25.592 12:02:38 -- common/autotest_common.sh@10 -- # set +x 00:08:25.592 12:02:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:25.592 12:02:38 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:25.592 12:02:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:25.592 12:02:38 -- common/autotest_common.sh@10 -- # set +x 00:08:25.592 [2024-06-11 12:02:38.458319] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:25.592 12:02:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.592 12:02:38 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:25.592 12:02:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:25.592 12:02:38 -- common/autotest_common.sh@10 -- # set +x 00:08:25.592 [2024-06-11 12:02:38.474485] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:25.592 12:02:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.592 12:02:38 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:25.592 12:02:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:25.592 12:02:38 -- common/autotest_common.sh@10 -- # set +x 00:08:25.592 12:02:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.592 12:02:38 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:25.592 12:02:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:25.592 12:02:38 -- common/autotest_common.sh@10 -- # set +x 00:08:25.592 12:02:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.592 12:02:38 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:08:25.592 12:02:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:25.592 12:02:38 -- common/autotest_common.sh@10 -- # set +x 00:08:25.592 12:02:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.592 12:02:38 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:25.592 12:02:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:25.592 12:02:38 -- target/referrals.sh@48 -- # jq length 00:08:25.592 12:02:38 -- common/autotest_common.sh@10 -- # set +x 00:08:25.592 12:02:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.592 12:02:38 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:25.592 12:02:38 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:25.592 12:02:38 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:25.592 12:02:38 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:25.592 12:02:38 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:25.592 12:02:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:25.592 12:02:38 -- common/autotest_common.sh@10 -- # set +x 00:08:25.592 12:02:38 -- target/referrals.sh@21 -- # sort 00:08:25.592 12:02:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.592 12:02:38 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:25.592 12:02:38 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:25.592 12:02:38 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:25.592 12:02:38 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:25.592 12:02:38 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:25.592 12:02:38 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:25.592 12:02:38 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:25.592 12:02:38 -- target/referrals.sh@26 -- # sort 00:08:25.853 12:02:38 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:25.853 12:02:38 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:25.853 12:02:38 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:25.853 12:02:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:25.853 12:02:38 -- common/autotest_common.sh@10 -- # set +x 00:08:25.853 12:02:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.853 12:02:38 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:25.853 12:02:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:25.853 12:02:38 -- common/autotest_common.sh@10 -- # set +x 00:08:25.853 12:02:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.853 12:02:38 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:25.853 12:02:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:25.853 12:02:38 -- common/autotest_common.sh@10 -- # set +x 00:08:25.854 12:02:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.854 12:02:38 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:25.854 12:02:38 -- target/referrals.sh@56 -- # jq length 00:08:25.854 12:02:38 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:25.854 12:02:38 -- common/autotest_common.sh@10 -- # set +x 00:08:25.854 12:02:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.854 12:02:38 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:25.854 12:02:38 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:25.854 12:02:38 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:25.854 12:02:38 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:25.854 12:02:38 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:25.854 12:02:38 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:25.854 12:02:38 -- target/referrals.sh@26 -- # sort 00:08:26.115 12:02:38 -- target/referrals.sh@26 -- # echo 00:08:26.115 12:02:38 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:26.115 12:02:38 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:26.115 12:02:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:26.115 12:02:38 -- common/autotest_common.sh@10 -- # set +x 00:08:26.115 12:02:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:26.115 12:02:38 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:26.115 12:02:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:26.115 12:02:38 -- common/autotest_common.sh@10 -- # set +x 00:08:26.115 12:02:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:26.115 12:02:38 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:26.115 12:02:38 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:26.115 12:02:38 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:26.115 12:02:38 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:26.115 12:02:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:26.115 12:02:38 -- target/referrals.sh@21 -- # sort 00:08:26.115 12:02:38 -- common/autotest_common.sh@10 -- # set +x 00:08:26.115 12:02:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:26.115 12:02:39 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:26.115 12:02:39 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:26.115 12:02:39 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:26.115 12:02:39 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:26.115 12:02:39 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:26.115 12:02:39 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.115 12:02:39 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:26.115 12:02:39 -- target/referrals.sh@26 -- # sort 00:08:26.115 12:02:39 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:26.115 12:02:39 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:26.115 12:02:39 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:26.115 12:02:39 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:26.115 12:02:39 -- 
target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:26.115 12:02:39 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.115 12:02:39 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:26.375 12:02:39 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:26.375 12:02:39 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:26.375 12:02:39 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:26.375 12:02:39 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:26.375 12:02:39 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.375 12:02:39 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:26.375 12:02:39 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:26.375 12:02:39 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:26.375 12:02:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:26.375 12:02:39 -- common/autotest_common.sh@10 -- # set +x 00:08:26.375 12:02:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:26.375 12:02:39 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:26.375 12:02:39 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:26.375 12:02:39 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:26.375 12:02:39 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:26.375 12:02:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:26.375 12:02:39 -- common/autotest_common.sh@10 -- # set +x 00:08:26.375 12:02:39 -- target/referrals.sh@21 -- # sort 00:08:26.375 12:02:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:26.635 12:02:39 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:26.635 12:02:39 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:26.635 12:02:39 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:26.636 12:02:39 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:26.636 12:02:39 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:26.636 12:02:39 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.636 12:02:39 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:26.636 12:02:39 -- target/referrals.sh@26 -- # sort 00:08:26.636 12:02:39 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:26.636 12:02:39 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:26.636 12:02:39 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:26.636 12:02:39 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:26.636 12:02:39 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:26.636 12:02:39 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.636 12:02:39 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:26.896 12:02:39 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:26.896 12:02:39 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:26.896 12:02:39 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:26.896 12:02:39 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:26.896 12:02:39 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.896 12:02:39 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:26.896 12:02:39 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:26.896 12:02:39 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:26.896 12:02:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:26.896 12:02:39 -- common/autotest_common.sh@10 -- # set +x 00:08:26.896 12:02:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:26.896 12:02:39 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:26.896 12:02:39 -- target/referrals.sh@82 -- # jq length 00:08:26.896 12:02:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:26.896 12:02:39 -- common/autotest_common.sh@10 -- # set +x 00:08:26.896 12:02:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:26.896 12:02:39 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:26.896 12:02:39 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:26.896 12:02:39 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:26.896 12:02:39 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:26.896 12:02:39 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.897 12:02:39 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:26.897 12:02:39 -- target/referrals.sh@26 -- # sort 00:08:27.157 12:02:39 -- target/referrals.sh@26 -- # echo 00:08:27.157 12:02:39 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:27.157 12:02:39 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:27.157 12:02:39 -- target/referrals.sh@86 -- # nvmftestfini 00:08:27.157 12:02:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:27.157 12:02:39 -- nvmf/common.sh@116 -- # sync 00:08:27.157 12:02:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:27.157 12:02:39 -- nvmf/common.sh@119 -- # set +e 00:08:27.157 12:02:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:27.157 12:02:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:27.157 rmmod nvme_tcp 00:08:27.157 rmmod nvme_fabrics 00:08:27.157 rmmod nvme_keyring 00:08:27.157 12:02:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:27.157 12:02:40 -- nvmf/common.sh@123 -- # set -e 00:08:27.157 12:02:40 -- nvmf/common.sh@124 -- # return 0 00:08:27.157 12:02:40 -- nvmf/common.sh@477 
-- # '[' -n 1311409 ']' 00:08:27.157 12:02:40 -- nvmf/common.sh@478 -- # killprocess 1311409 00:08:27.157 12:02:40 -- common/autotest_common.sh@926 -- # '[' -z 1311409 ']' 00:08:27.157 12:02:40 -- common/autotest_common.sh@930 -- # kill -0 1311409 00:08:27.157 12:02:40 -- common/autotest_common.sh@931 -- # uname 00:08:27.157 12:02:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:27.157 12:02:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1311409 00:08:27.157 12:02:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:27.157 12:02:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:27.157 12:02:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1311409' 00:08:27.157 killing process with pid 1311409 00:08:27.157 12:02:40 -- common/autotest_common.sh@945 -- # kill 1311409 00:08:27.157 12:02:40 -- common/autotest_common.sh@950 -- # wait 1311409 00:08:27.417 12:02:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:27.417 12:02:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:27.418 12:02:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:27.418 12:02:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:27.418 12:02:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:27.418 12:02:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.418 12:02:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.418 12:02:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.329 12:02:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:29.329 00:08:29.329 real 0m12.189s 00:08:29.329 user 0m12.858s 00:08:29.329 sys 0m5.964s 00:08:29.329 12:02:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.329 12:02:42 -- common/autotest_common.sh@10 -- # set +x 00:08:29.329 ************************************ 00:08:29.329 END TEST nvmf_referrals 00:08:29.329 ************************************ 00:08:29.329 12:02:42 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:29.329 12:02:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:29.329 12:02:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:29.329 12:02:42 -- common/autotest_common.sh@10 -- # set +x 00:08:29.329 ************************************ 00:08:29.329 START TEST nvmf_connect_disconnect 00:08:29.329 ************************************ 00:08:29.329 12:02:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:29.591 * Looking for test storage... 
00:08:29.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.591 12:02:42 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.591 12:02:42 -- nvmf/common.sh@7 -- # uname -s 00:08:29.591 12:02:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.591 12:02:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.591 12:02:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.591 12:02:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.591 12:02:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.591 12:02:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.591 12:02:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.591 12:02:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.591 12:02:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.591 12:02:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.591 12:02:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:29.591 12:02:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:29.591 12:02:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.591 12:02:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.591 12:02:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.591 12:02:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.591 12:02:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.591 12:02:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.591 12:02:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.591 12:02:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.591 12:02:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.591 12:02:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.591 12:02:42 -- paths/export.sh@5 -- # export PATH 00:08:29.591 12:02:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.591 12:02:42 -- nvmf/common.sh@46 -- # : 0 00:08:29.591 12:02:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:29.591 12:02:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:29.591 12:02:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:29.591 12:02:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.591 12:02:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.591 12:02:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:29.591 12:02:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:29.592 12:02:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:29.592 12:02:42 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:29.592 12:02:42 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:29.592 12:02:42 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:29.592 12:02:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:29.592 12:02:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.592 12:02:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:29.592 12:02:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:29.592 12:02:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:29.592 12:02:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.592 12:02:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:29.592 12:02:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.592 12:02:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:29.592 12:02:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:29.592 12:02:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:29.592 12:02:42 -- common/autotest_common.sh@10 -- # set +x 00:08:37.734 12:02:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:37.734 12:02:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:37.734 12:02:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:37.734 12:02:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:37.734 12:02:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:37.734 12:02:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:37.734 12:02:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:37.734 12:02:49 -- nvmf/common.sh@294 -- # net_devs=() 00:08:37.734 12:02:49 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:08:37.734 12:02:49 -- nvmf/common.sh@295 -- # e810=() 00:08:37.734 12:02:49 -- nvmf/common.sh@295 -- # local -ga e810 00:08:37.734 12:02:49 -- nvmf/common.sh@296 -- # x722=() 00:08:37.734 12:02:49 -- nvmf/common.sh@296 -- # local -ga x722 00:08:37.734 12:02:49 -- nvmf/common.sh@297 -- # mlx=() 00:08:37.734 12:02:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:37.734 12:02:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:37.734 12:02:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:37.734 12:02:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:37.734 12:02:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:37.734 12:02:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:37.734 12:02:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:37.734 12:02:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:37.734 12:02:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:37.734 12:02:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:37.734 12:02:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:37.734 12:02:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:37.734 12:02:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:37.734 12:02:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:37.734 12:02:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:37.734 12:02:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:37.734 12:02:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:37.734 12:02:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:37.734 12:02:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:37.734 12:02:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:37.734 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:37.734 12:02:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:37.734 12:02:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:37.734 12:02:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.734 12:02:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.734 12:02:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:37.734 12:02:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:37.734 12:02:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:37.734 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:37.734 12:02:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:37.734 12:02:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:37.734 12:02:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.734 12:02:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.734 12:02:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:37.734 12:02:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:37.734 12:02:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:37.734 12:02:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:37.734 12:02:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:37.734 12:02:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.734 12:02:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:37.734 12:02:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.734 12:02:49 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:31:00.0: cvl_0_0' 00:08:37.734 Found net devices under 0000:31:00.0: cvl_0_0 00:08:37.734 12:02:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.734 12:02:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:37.734 12:02:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.734 12:02:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:37.734 12:02:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.734 12:02:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:37.734 Found net devices under 0000:31:00.1: cvl_0_1 00:08:37.734 12:02:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.734 12:02:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:37.734 12:02:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:37.734 12:02:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:37.734 12:02:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:37.734 12:02:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:37.734 12:02:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:37.734 12:02:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:37.734 12:02:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:37.734 12:02:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:37.734 12:02:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:37.734 12:02:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:37.734 12:02:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:37.734 12:02:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:37.734 12:02:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:37.734 12:02:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:37.734 12:02:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:37.734 12:02:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:37.734 12:02:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:37.734 12:02:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:37.734 12:02:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:37.734 12:02:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:37.734 12:02:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:37.734 12:02:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:37.734 12:02:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:37.734 12:02:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:37.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:37.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:08:37.734 00:08:37.734 --- 10.0.0.2 ping statistics --- 00:08:37.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.734 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:08:37.734 12:02:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:37.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:37.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:08:37.734 00:08:37.734 --- 10.0.0.1 ping statistics --- 00:08:37.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.734 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:08:37.735 12:02:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:37.735 12:02:49 -- nvmf/common.sh@410 -- # return 0 00:08:37.735 12:02:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:37.735 12:02:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:37.735 12:02:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:37.735 12:02:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:37.735 12:02:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:37.735 12:02:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:37.735 12:02:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:37.735 12:02:49 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:37.735 12:02:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:37.735 12:02:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:37.735 12:02:49 -- common/autotest_common.sh@10 -- # set +x 00:08:37.735 12:02:49 -- nvmf/common.sh@469 -- # nvmfpid=1316266 00:08:37.735 12:02:49 -- nvmf/common.sh@470 -- # waitforlisten 1316266 00:08:37.735 12:02:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:37.735 12:02:49 -- common/autotest_common.sh@819 -- # '[' -z 1316266 ']' 00:08:37.735 12:02:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.735 12:02:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:37.735 12:02:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.735 12:02:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:37.735 12:02:49 -- common/autotest_common.sh@10 -- # set +x 00:08:37.735 [2024-06-11 12:02:49.814292] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:37.735 [2024-06-11 12:02:49.814337] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.735 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.735 [2024-06-11 12:02:49.880126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:37.735 [2024-06-11 12:02:49.909285] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:37.735 [2024-06-11 12:02:49.909419] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.735 [2024-06-11 12:02:49.909429] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.735 [2024-06-11 12:02:49.909437] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:37.735 [2024-06-11 12:02:49.909576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.735 [2024-06-11 12:02:49.909691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.735 [2024-06-11 12:02:49.909850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.735 [2024-06-11 12:02:49.909851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.735 12:02:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:37.735 12:02:50 -- common/autotest_common.sh@852 -- # return 0 00:08:37.735 12:02:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:37.735 12:02:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:37.735 12:02:50 -- common/autotest_common.sh@10 -- # set +x 00:08:37.735 12:02:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.735 12:02:50 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:37.735 12:02:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.735 12:02:50 -- common/autotest_common.sh@10 -- # set +x 00:08:37.735 [2024-06-11 12:02:50.626367] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.735 12:02:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.735 12:02:50 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:37.735 12:02:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.735 12:02:50 -- common/autotest_common.sh@10 -- # set +x 00:08:37.735 12:02:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.735 12:02:50 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:37.735 12:02:50 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:37.735 12:02:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.735 12:02:50 -- common/autotest_common.sh@10 -- # set +x 00:08:37.735 12:02:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.735 12:02:50 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:37.735 12:02:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.735 12:02:50 -- common/autotest_common.sh@10 -- # set +x 00:08:37.735 12:02:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.735 12:02:50 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.735 12:02:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.735 12:02:50 -- common/autotest_common.sh@10 -- # set +x 00:08:37.735 [2024-06-11 12:02:50.685713] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.735 12:02:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.735 12:02:50 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:37.735 12:02:50 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:37.735 12:02:50 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:37.735 12:02:50 -- target/connect_disconnect.sh@34 -- # set +x 00:08:40.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.279 [2024-06-11 12:02:59.909933] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8498a0 is same with the state(5) to be set 00:08:47.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:10:34.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.968 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.251 [2024-06-11 12:06:05.149830] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e8250 is same with the state(5) to be set 00:11:52.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.642 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.693 12:06:39 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:26.693 12:06:39 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:26.693 12:06:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:26.693 12:06:39 -- nvmf/common.sh@116 -- # sync 00:12:26.693 12:06:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:26.693 12:06:39 -- nvmf/common.sh@119 -- # set +e 00:12:26.693 12:06:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:26.693 12:06:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:26.693 rmmod nvme_tcp 00:12:26.693 rmmod nvme_fabrics 00:12:26.693 rmmod nvme_keyring 00:12:26.953 12:06:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:26.953 12:06:39 -- nvmf/common.sh@123 -- # set -e 00:12:26.953 12:06:39 -- nvmf/common.sh@124 -- # return 0 00:12:26.953 12:06:39 -- nvmf/common.sh@477 -- # '[' -n 1316266 ']' 00:12:26.953 12:06:39 -- nvmf/common.sh@478 -- # killprocess 1316266 00:12:26.953 12:06:39 -- common/autotest_common.sh@926 -- # '[' -z 1316266 ']' 00:12:26.953 12:06:39 -- common/autotest_common.sh@930 -- # kill -0 1316266 00:12:26.953 12:06:39 -- common/autotest_common.sh@931 -- # uname 00:12:26.953 12:06:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:26.953 12:06:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1316266 00:12:26.953 12:06:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:26.953 12:06:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:26.953 12:06:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1316266' 00:12:26.953 killing process with pid 1316266 00:12:26.953 12:06:39 -- common/autotest_common.sh@945 -- # kill 1316266 00:12:26.953 12:06:39 -- common/autotest_common.sh@950 -- # wait 1316266 00:12:26.953 12:06:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:26.953 12:06:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:26.954 12:06:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:26.954 12:06:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:26.954 12:06:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:26.954 12:06:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.954 12:06:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.954 12:06:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.497 12:06:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:29.497 00:12:29.497 real 3m59.677s 00:12:29.497 user 15m14.227s 00:12:29.497 sys 0m19.317s 00:12:29.497 12:06:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:29.497 12:06:42 -- common/autotest_common.sh@10 -- # set +x 00:12:29.497 ************************************ 00:12:29.497 END TEST nvmf_connect_disconnect 00:12:29.497 ************************************ 00:12:29.497 12:06:42 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:29.497 12:06:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:29.497 12:06:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:29.497 12:06:42 -- common/autotest_common.sh@10 -- 
# set +x 00:12:29.497 ************************************ 00:12:29.497 START TEST nvmf_multitarget 00:12:29.497 ************************************ 00:12:29.497 12:06:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:29.497 * Looking for test storage... 00:12:29.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.497 12:06:42 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.497 12:06:42 -- nvmf/common.sh@7 -- # uname -s 00:12:29.497 12:06:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.497 12:06:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.497 12:06:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.497 12:06:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.497 12:06:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.497 12:06:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.497 12:06:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.497 12:06:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.497 12:06:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.497 12:06:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.497 12:06:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:29.497 12:06:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:29.497 12:06:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.497 12:06:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.497 12:06:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.497 12:06:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.497 12:06:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.497 12:06:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.497 12:06:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.497 12:06:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.497 12:06:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.497 12:06:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.497 12:06:42 -- paths/export.sh@5 -- # export PATH 00:12:29.497 12:06:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.497 12:06:42 -- nvmf/common.sh@46 -- # : 0 00:12:29.497 12:06:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:29.497 12:06:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:29.498 12:06:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:29.498 12:06:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.498 12:06:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.498 12:06:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:29.498 12:06:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:29.498 12:06:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:29.498 12:06:42 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:29.498 12:06:42 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:29.498 12:06:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:29.498 12:06:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.498 12:06:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:29.498 12:06:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:29.498 12:06:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:29.498 12:06:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.498 12:06:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:29.498 12:06:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.498 12:06:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:29.498 12:06:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:29.498 12:06:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:29.498 12:06:42 -- common/autotest_common.sh@10 -- # set +x 00:12:36.083 12:06:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:36.083 12:06:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:36.083 12:06:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:36.083 12:06:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:36.083 12:06:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:36.083 12:06:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:36.083 12:06:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:36.083 12:06:48 -- nvmf/common.sh@294 -- # net_devs=() 00:12:36.083 12:06:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:36.083 12:06:48 -- 
nvmf/common.sh@295 -- # e810=() 00:12:36.083 12:06:48 -- nvmf/common.sh@295 -- # local -ga e810 00:12:36.083 12:06:48 -- nvmf/common.sh@296 -- # x722=() 00:12:36.083 12:06:48 -- nvmf/common.sh@296 -- # local -ga x722 00:12:36.083 12:06:48 -- nvmf/common.sh@297 -- # mlx=() 00:12:36.083 12:06:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:36.083 12:06:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:36.083 12:06:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:36.083 12:06:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:36.083 12:06:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:36.083 12:06:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:36.083 12:06:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:36.083 12:06:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:36.083 12:06:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:36.083 12:06:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:36.083 12:06:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:36.083 12:06:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:36.083 12:06:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:36.083 12:06:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:36.083 12:06:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:36.083 12:06:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:36.083 12:06:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:36.083 12:06:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:36.083 12:06:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:36.083 12:06:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:36.083 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:36.083 12:06:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:36.083 12:06:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:36.083 12:06:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.083 12:06:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.083 12:06:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:36.083 12:06:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:36.083 12:06:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:36.083 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:36.083 12:06:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:36.083 12:06:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:36.083 12:06:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.083 12:06:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.083 12:06:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:36.083 12:06:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:36.083 12:06:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:36.083 12:06:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:36.083 12:06:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:36.083 12:06:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.083 12:06:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:36.083 12:06:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.083 12:06:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:12:36.083 Found net devices under 0000:31:00.0: cvl_0_0 00:12:36.083 12:06:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.084 12:06:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:36.084 12:06:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.084 12:06:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:36.084 12:06:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.084 12:06:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:36.084 Found net devices under 0000:31:00.1: cvl_0_1 00:12:36.084 12:06:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.084 12:06:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:36.084 12:06:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:36.084 12:06:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:36.084 12:06:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:36.084 12:06:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:36.084 12:06:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.084 12:06:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.084 12:06:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:36.084 12:06:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:36.084 12:06:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:36.084 12:06:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:36.084 12:06:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:36.084 12:06:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:36.084 12:06:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.084 12:06:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:36.084 12:06:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:36.084 12:06:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:36.084 12:06:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:36.345 12:06:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:36.345 12:06:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:36.345 12:06:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:36.345 12:06:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:36.345 12:06:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:36.345 12:06:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:36.345 12:06:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:36.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:36.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:12:36.345 00:12:36.345 --- 10.0.0.2 ping statistics --- 00:12:36.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.345 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:12:36.345 12:06:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:36.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:36.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:12:36.345 00:12:36.345 --- 10.0.0.1 ping statistics --- 00:12:36.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.345 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:12:36.345 12:06:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.345 12:06:49 -- nvmf/common.sh@410 -- # return 0 00:12:36.345 12:06:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:36.345 12:06:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.345 12:06:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:36.345 12:06:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:36.345 12:06:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.345 12:06:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:36.345 12:06:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:36.345 12:06:49 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:36.345 12:06:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:36.345 12:06:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:36.345 12:06:49 -- common/autotest_common.sh@10 -- # set +x 00:12:36.345 12:06:49 -- nvmf/common.sh@469 -- # nvmfpid=1368750 00:12:36.345 12:06:49 -- nvmf/common.sh@470 -- # waitforlisten 1368750 00:12:36.345 12:06:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:36.345 12:06:49 -- common/autotest_common.sh@819 -- # '[' -z 1368750 ']' 00:12:36.345 12:06:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.345 12:06:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:36.345 12:06:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.345 12:06:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:36.345 12:06:49 -- common/autotest_common.sh@10 -- # set +x 00:12:36.606 [2024-06-11 12:06:49.417416] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:12:36.606 [2024-06-11 12:06:49.417478] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.606 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.606 [2024-06-11 12:06:49.489686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:36.606 [2024-06-11 12:06:49.527144] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:36.606 [2024-06-11 12:06:49.527290] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.606 [2024-06-11 12:06:49.527300] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.606 [2024-06-11 12:06:49.527309] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:36.606 [2024-06-11 12:06:49.527461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.606 [2024-06-11 12:06:49.527604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.606 [2024-06-11 12:06:49.527765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.606 [2024-06-11 12:06:49.527766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:37.177 12:06:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:37.177 12:06:50 -- common/autotest_common.sh@852 -- # return 0 00:12:37.177 12:06:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:37.177 12:06:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:37.177 12:06:50 -- common/autotest_common.sh@10 -- # set +x 00:12:37.438 12:06:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.438 12:06:50 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:37.438 12:06:50 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:37.438 12:06:50 -- target/multitarget.sh@21 -- # jq length 00:12:37.438 12:06:50 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:37.438 12:06:50 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:37.438 "nvmf_tgt_1" 00:12:37.438 12:06:50 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:37.698 "nvmf_tgt_2" 00:12:37.698 12:06:50 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:37.698 12:06:50 -- target/multitarget.sh@28 -- # jq length 00:12:37.698 12:06:50 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:37.698 12:06:50 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:37.698 true 00:12:37.698 12:06:50 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:37.958 true 00:12:37.958 12:06:50 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:37.958 12:06:50 -- target/multitarget.sh@35 -- # jq length 00:12:37.958 12:06:50 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:37.958 12:06:50 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:37.958 12:06:50 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:37.958 12:06:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:37.958 12:06:50 -- nvmf/common.sh@116 -- # sync 00:12:37.958 12:06:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:37.958 12:06:50 -- nvmf/common.sh@119 -- # set +e 00:12:37.958 12:06:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:37.958 12:06:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:37.958 rmmod nvme_tcp 00:12:37.958 rmmod nvme_fabrics 00:12:37.958 rmmod nvme_keyring 00:12:37.958 12:06:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:37.958 12:06:50 -- nvmf/common.sh@123 -- # set -e 00:12:37.958 12:06:50 -- nvmf/common.sh@124 -- # return 0 
00:12:37.958 12:06:50 -- nvmf/common.sh@477 -- # '[' -n 1368750 ']' 00:12:37.958 12:06:50 -- nvmf/common.sh@478 -- # killprocess 1368750 00:12:37.958 12:06:50 -- common/autotest_common.sh@926 -- # '[' -z 1368750 ']' 00:12:37.958 12:06:50 -- common/autotest_common.sh@930 -- # kill -0 1368750 00:12:37.958 12:06:50 -- common/autotest_common.sh@931 -- # uname 00:12:38.218 12:06:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:38.218 12:06:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1368750 00:12:38.218 12:06:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:38.218 12:06:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:38.218 12:06:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1368750' 00:12:38.218 killing process with pid 1368750 00:12:38.218 12:06:51 -- common/autotest_common.sh@945 -- # kill 1368750 00:12:38.218 12:06:51 -- common/autotest_common.sh@950 -- # wait 1368750 00:12:38.218 12:06:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:38.218 12:06:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:38.218 12:06:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:38.218 12:06:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:38.218 12:06:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:38.218 12:06:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.218 12:06:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:38.218 12:06:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.764 12:06:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:40.764 00:12:40.764 real 0m11.175s 00:12:40.764 user 0m9.332s 00:12:40.764 sys 0m5.685s 00:12:40.764 12:06:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:40.764 12:06:53 -- common/autotest_common.sh@10 -- # set +x 00:12:40.764 ************************************ 00:12:40.764 END TEST nvmf_multitarget 00:12:40.764 ************************************ 00:12:40.764 12:06:53 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:40.764 12:06:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:40.764 12:06:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:40.764 12:06:53 -- common/autotest_common.sh@10 -- # set +x 00:12:40.764 ************************************ 00:12:40.764 START TEST nvmf_rpc 00:12:40.764 ************************************ 00:12:40.764 12:06:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:40.764 * Looking for test storage... 
00:12:40.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:40.764 12:06:53 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:40.764 12:06:53 -- nvmf/common.sh@7 -- # uname -s 00:12:40.764 12:06:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:40.764 12:06:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:40.764 12:06:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:40.764 12:06:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:40.764 12:06:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:40.764 12:06:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:40.764 12:06:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:40.764 12:06:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:40.764 12:06:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:40.764 12:06:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:40.764 12:06:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:40.764 12:06:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:40.764 12:06:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:40.764 12:06:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:40.764 12:06:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:40.764 12:06:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:40.764 12:06:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.764 12:06:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.764 12:06:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.764 12:06:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.764 12:06:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.764 12:06:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.764 12:06:53 -- paths/export.sh@5 -- # export PATH 00:12:40.764 12:06:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.764 12:06:53 -- nvmf/common.sh@46 -- # : 0 00:12:40.764 12:06:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:40.764 12:06:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:40.764 12:06:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:40.764 12:06:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:40.764 12:06:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:40.764 12:06:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:40.764 12:06:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:40.764 12:06:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:40.764 12:06:53 -- target/rpc.sh@11 -- # loops=5 00:12:40.764 12:06:53 -- target/rpc.sh@23 -- # nvmftestinit 00:12:40.764 12:06:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:40.764 12:06:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:40.764 12:06:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:40.764 12:06:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:40.764 12:06:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:40.764 12:06:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.764 12:06:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:40.764 12:06:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.764 12:06:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:40.764 12:06:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:40.764 12:06:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:40.764 12:06:53 -- common/autotest_common.sh@10 -- # set +x 00:12:47.353 12:07:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:47.353 12:07:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:47.353 12:07:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:47.353 12:07:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:47.353 12:07:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:47.353 12:07:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:47.353 12:07:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:47.353 12:07:00 -- nvmf/common.sh@294 -- # net_devs=() 00:12:47.353 12:07:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:47.353 12:07:00 -- nvmf/common.sh@295 -- # e810=() 00:12:47.353 12:07:00 -- nvmf/common.sh@295 -- # local -ga e810 00:12:47.353 
12:07:00 -- nvmf/common.sh@296 -- # x722=() 00:12:47.353 12:07:00 -- nvmf/common.sh@296 -- # local -ga x722 00:12:47.353 12:07:00 -- nvmf/common.sh@297 -- # mlx=() 00:12:47.354 12:07:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:47.354 12:07:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:47.354 12:07:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:47.354 12:07:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:47.354 12:07:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:47.354 12:07:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:47.354 12:07:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:47.354 12:07:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:47.354 12:07:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:47.354 12:07:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:47.354 12:07:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:47.354 12:07:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:47.354 12:07:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:47.354 12:07:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:47.354 12:07:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:47.354 12:07:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:47.354 12:07:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:47.354 12:07:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:47.354 12:07:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:47.354 12:07:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:47.354 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:47.354 12:07:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:47.354 12:07:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:47.354 12:07:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.354 12:07:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.354 12:07:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:47.354 12:07:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:47.354 12:07:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:47.354 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:47.354 12:07:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:47.354 12:07:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:47.354 12:07:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.354 12:07:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.354 12:07:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:47.354 12:07:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:47.354 12:07:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:47.354 12:07:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:47.354 12:07:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:47.354 12:07:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.354 12:07:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:47.354 12:07:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.354 12:07:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:47.354 Found net devices under 0000:31:00.0: cvl_0_0 00:12:47.354 12:07:00 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:47.354 12:07:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:47.354 12:07:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.354 12:07:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:47.354 12:07:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.354 12:07:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:47.354 Found net devices under 0000:31:00.1: cvl_0_1 00:12:47.354 12:07:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.354 12:07:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:47.354 12:07:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:47.354 12:07:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:47.354 12:07:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:47.354 12:07:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:47.354 12:07:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.354 12:07:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:47.354 12:07:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:47.354 12:07:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:47.354 12:07:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:47.354 12:07:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:47.354 12:07:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:47.354 12:07:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:47.354 12:07:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.354 12:07:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:47.354 12:07:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:47.354 12:07:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:47.354 12:07:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:47.615 12:07:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:47.615 12:07:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:47.615 12:07:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:47.615 12:07:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:47.615 12:07:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:47.615 12:07:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:47.615 12:07:00 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:47.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:47.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:12:47.615 00:12:47.615 --- 10.0.0.2 ping statistics --- 00:12:47.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.615 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:12:47.615 12:07:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:47.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:47.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:12:47.615 00:12:47.615 --- 10.0.0.1 ping statistics --- 00:12:47.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.615 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:12:47.615 12:07:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:47.615 12:07:00 -- nvmf/common.sh@410 -- # return 0 00:12:47.615 12:07:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:47.615 12:07:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:47.615 12:07:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:47.615 12:07:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:47.616 12:07:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:47.616 12:07:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:47.616 12:07:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:47.877 12:07:00 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:47.877 12:07:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:47.877 12:07:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:47.877 12:07:00 -- common/autotest_common.sh@10 -- # set +x 00:12:47.877 12:07:00 -- nvmf/common.sh@469 -- # nvmfpid=1373513 00:12:47.877 12:07:00 -- nvmf/common.sh@470 -- # waitforlisten 1373513 00:12:47.877 12:07:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:47.877 12:07:00 -- common/autotest_common.sh@819 -- # '[' -z 1373513 ']' 00:12:47.877 12:07:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.877 12:07:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:47.877 12:07:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.877 12:07:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:47.877 12:07:00 -- common/autotest_common.sh@10 -- # set +x 00:12:47.877 [2024-06-11 12:07:00.719079] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:12:47.877 [2024-06-11 12:07:00.719137] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.877 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.877 [2024-06-11 12:07:00.790893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:47.877 [2024-06-11 12:07:00.828484] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:47.877 [2024-06-11 12:07:00.828630] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.877 [2024-06-11 12:07:00.828641] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.877 [2024-06-11 12:07:00.828650] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:47.877 [2024-06-11 12:07:00.828796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.877 [2024-06-11 12:07:00.828920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.877 [2024-06-11 12:07:00.829135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.877 [2024-06-11 12:07:00.829135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:48.820 12:07:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:48.820 12:07:01 -- common/autotest_common.sh@852 -- # return 0 00:12:48.820 12:07:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:48.820 12:07:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:48.820 12:07:01 -- common/autotest_common.sh@10 -- # set +x 00:12:48.820 12:07:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.820 12:07:01 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:48.820 12:07:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:48.820 12:07:01 -- common/autotest_common.sh@10 -- # set +x 00:12:48.820 12:07:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:48.820 12:07:01 -- target/rpc.sh@26 -- # stats='{ 00:12:48.820 "tick_rate": 2400000000, 00:12:48.820 "poll_groups": [ 00:12:48.820 { 00:12:48.820 "name": "nvmf_tgt_poll_group_0", 00:12:48.820 "admin_qpairs": 0, 00:12:48.820 "io_qpairs": 0, 00:12:48.820 "current_admin_qpairs": 0, 00:12:48.820 "current_io_qpairs": 0, 00:12:48.820 "pending_bdev_io": 0, 00:12:48.820 "completed_nvme_io": 0, 00:12:48.820 "transports": [] 00:12:48.820 }, 00:12:48.820 { 00:12:48.820 "name": "nvmf_tgt_poll_group_1", 00:12:48.820 "admin_qpairs": 0, 00:12:48.820 "io_qpairs": 0, 00:12:48.820 "current_admin_qpairs": 0, 00:12:48.820 "current_io_qpairs": 0, 00:12:48.820 "pending_bdev_io": 0, 00:12:48.820 "completed_nvme_io": 0, 00:12:48.820 "transports": [] 00:12:48.820 }, 00:12:48.820 { 00:12:48.820 "name": "nvmf_tgt_poll_group_2", 00:12:48.820 "admin_qpairs": 0, 00:12:48.820 "io_qpairs": 0, 00:12:48.820 "current_admin_qpairs": 0, 00:12:48.820 "current_io_qpairs": 0, 00:12:48.820 "pending_bdev_io": 0, 00:12:48.820 "completed_nvme_io": 0, 00:12:48.820 "transports": [] 00:12:48.820 }, 00:12:48.820 { 00:12:48.820 "name": "nvmf_tgt_poll_group_3", 00:12:48.820 "admin_qpairs": 0, 00:12:48.820 "io_qpairs": 0, 00:12:48.820 "current_admin_qpairs": 0, 00:12:48.820 "current_io_qpairs": 0, 00:12:48.820 "pending_bdev_io": 0, 00:12:48.820 "completed_nvme_io": 0, 00:12:48.820 "transports": [] 00:12:48.820 } 00:12:48.820 ] 00:12:48.820 }' 00:12:48.820 12:07:01 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:48.820 12:07:01 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:48.820 12:07:01 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:48.820 12:07:01 -- target/rpc.sh@15 -- # wc -l 00:12:48.820 12:07:01 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:48.820 12:07:01 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:48.820 12:07:01 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:48.820 12:07:01 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:48.820 12:07:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:48.820 12:07:01 -- common/autotest_common.sh@10 -- # set +x 00:12:48.820 [2024-06-11 12:07:01.653690] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:48.820 12:07:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:48.820 12:07:01 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:48.820 12:07:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:48.820 12:07:01 -- common/autotest_common.sh@10 -- # set +x 00:12:48.820 12:07:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:48.820 12:07:01 -- target/rpc.sh@33 -- # stats='{ 00:12:48.820 "tick_rate": 2400000000, 00:12:48.820 "poll_groups": [ 00:12:48.820 { 00:12:48.820 "name": "nvmf_tgt_poll_group_0", 00:12:48.820 "admin_qpairs": 0, 00:12:48.820 "io_qpairs": 0, 00:12:48.820 "current_admin_qpairs": 0, 00:12:48.820 "current_io_qpairs": 0, 00:12:48.820 "pending_bdev_io": 0, 00:12:48.820 "completed_nvme_io": 0, 00:12:48.820 "transports": [ 00:12:48.820 { 00:12:48.820 "trtype": "TCP" 00:12:48.820 } 00:12:48.820 ] 00:12:48.821 }, 00:12:48.821 { 00:12:48.821 "name": "nvmf_tgt_poll_group_1", 00:12:48.821 "admin_qpairs": 0, 00:12:48.821 "io_qpairs": 0, 00:12:48.821 "current_admin_qpairs": 0, 00:12:48.821 "current_io_qpairs": 0, 00:12:48.821 "pending_bdev_io": 0, 00:12:48.821 "completed_nvme_io": 0, 00:12:48.821 "transports": [ 00:12:48.821 { 00:12:48.821 "trtype": "TCP" 00:12:48.821 } 00:12:48.821 ] 00:12:48.821 }, 00:12:48.821 { 00:12:48.821 "name": "nvmf_tgt_poll_group_2", 00:12:48.821 "admin_qpairs": 0, 00:12:48.821 "io_qpairs": 0, 00:12:48.821 "current_admin_qpairs": 0, 00:12:48.821 "current_io_qpairs": 0, 00:12:48.821 "pending_bdev_io": 0, 00:12:48.821 "completed_nvme_io": 0, 00:12:48.821 "transports": [ 00:12:48.821 { 00:12:48.821 "trtype": "TCP" 00:12:48.821 } 00:12:48.821 ] 00:12:48.821 }, 00:12:48.821 { 00:12:48.821 "name": "nvmf_tgt_poll_group_3", 00:12:48.821 "admin_qpairs": 0, 00:12:48.821 "io_qpairs": 0, 00:12:48.821 "current_admin_qpairs": 0, 00:12:48.821 "current_io_qpairs": 0, 00:12:48.821 "pending_bdev_io": 0, 00:12:48.821 "completed_nvme_io": 0, 00:12:48.821 "transports": [ 00:12:48.821 { 00:12:48.821 "trtype": "TCP" 00:12:48.821 } 00:12:48.821 ] 00:12:48.821 } 00:12:48.821 ] 00:12:48.821 }' 00:12:48.821 12:07:01 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:48.821 12:07:01 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:48.821 12:07:01 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:48.821 12:07:01 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:48.821 12:07:01 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:48.821 12:07:01 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:48.821 12:07:01 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:48.821 12:07:01 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:48.821 12:07:01 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:48.821 12:07:01 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:48.821 12:07:01 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:48.821 12:07:01 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:48.821 12:07:01 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:48.821 12:07:01 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:48.821 12:07:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:48.821 12:07:01 -- common/autotest_common.sh@10 -- # set +x 00:12:48.821 Malloc1 00:12:48.821 12:07:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:48.821 12:07:01 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:48.821 12:07:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:48.821 12:07:01 -- common/autotest_common.sh@10 -- # set +x 00:12:48.821 
12:07:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:48.821 12:07:01 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:48.821 12:07:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:48.821 12:07:01 -- common/autotest_common.sh@10 -- # set +x 00:12:48.821 12:07:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:48.821 12:07:01 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:48.821 12:07:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:48.821 12:07:01 -- common/autotest_common.sh@10 -- # set +x 00:12:48.821 12:07:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:48.821 12:07:01 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.821 12:07:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:48.821 12:07:01 -- common/autotest_common.sh@10 -- # set +x 00:12:48.821 [2024-06-11 12:07:01.845523] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.821 12:07:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:48.821 12:07:01 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:48.821 12:07:01 -- common/autotest_common.sh@640 -- # local es=0 00:12:48.821 12:07:01 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:48.821 12:07:01 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:48.821 12:07:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:48.821 12:07:01 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:49.082 12:07:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:49.082 12:07:01 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:49.082 12:07:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:49.082 12:07:01 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:49.082 12:07:01 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:49.082 12:07:01 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:49.082 [2024-06-11 12:07:01.872317] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:12:49.082 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:49.082 could not add new controller: failed to write to nvme-fabrics device 00:12:49.082 12:07:01 -- common/autotest_common.sh@643 -- # es=1 00:12:49.082 12:07:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:49.082 12:07:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:49.082 12:07:01 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:12:49.082 12:07:01 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:49.082 12:07:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:49.082 12:07:01 -- common/autotest_common.sh@10 -- # set +x 00:12:49.082 12:07:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:49.082 12:07:01 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.464 12:07:03 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:50.464 12:07:03 -- common/autotest_common.sh@1177 -- # local i=0 00:12:50.464 12:07:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.464 12:07:03 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:50.464 12:07:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:52.379 12:07:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:52.379 12:07:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:52.379 12:07:05 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.379 12:07:05 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:52.379 12:07:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.379 12:07:05 -- common/autotest_common.sh@1187 -- # return 0 00:12:52.379 12:07:05 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.379 12:07:05 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:52.379 12:07:05 -- common/autotest_common.sh@1198 -- # local i=0 00:12:52.379 12:07:05 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:52.379 12:07:05 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.640 12:07:05 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:52.640 12:07:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.640 12:07:05 -- common/autotest_common.sh@1210 -- # return 0 00:12:52.640 12:07:05 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:52.640 12:07:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:52.640 12:07:05 -- common/autotest_common.sh@10 -- # set +x 00:12:52.640 12:07:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:52.640 12:07:05 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.640 12:07:05 -- common/autotest_common.sh@640 -- # local es=0 00:12:52.640 12:07:05 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.640 12:07:05 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:52.640 12:07:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:52.640 12:07:05 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:52.640 12:07:05 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:52.640 12:07:05 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:52.640 12:07:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:52.640 12:07:05 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:52.640 12:07:05 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:52.640 12:07:05 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.640 [2024-06-11 12:07:05.466644] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:12:52.640 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:52.640 could not add new controller: failed to write to nvme-fabrics device 00:12:52.640 12:07:05 -- common/autotest_common.sh@643 -- # es=1 00:12:52.640 12:07:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:52.640 12:07:05 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:52.640 12:07:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:52.640 12:07:05 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:52.640 12:07:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:52.640 12:07:05 -- common/autotest_common.sh@10 -- # set +x 00:12:52.640 12:07:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:52.640 12:07:05 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:54.025 12:07:06 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:54.025 12:07:06 -- common/autotest_common.sh@1177 -- # local i=0 00:12:54.025 12:07:06 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:54.025 12:07:06 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:54.025 12:07:07 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:56.570 12:07:09 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:56.570 12:07:09 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:56.570 12:07:09 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:56.570 12:07:09 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:56.570 12:07:09 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:56.570 12:07:09 -- common/autotest_common.sh@1187 -- # return 0 00:12:56.570 12:07:09 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.570 12:07:09 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:56.570 12:07:09 -- common/autotest_common.sh@1198 -- # local i=0 00:12:56.570 12:07:09 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:56.570 12:07:09 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.570 12:07:09 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:56.570 12:07:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.570 12:07:09 -- common/autotest_common.sh@1210 -- # return 0 00:12:56.570 12:07:09 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.570 12:07:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.570 12:07:09 -- common/autotest_common.sh@10 -- # set +x 00:12:56.570 12:07:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.570 12:07:09 -- target/rpc.sh@81 -- # seq 1 5 00:12:56.570 12:07:09 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:56.570 12:07:09 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:56.570 12:07:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.570 12:07:09 -- common/autotest_common.sh@10 -- # set +x 00:12:56.570 12:07:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.570 12:07:09 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:56.570 12:07:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.570 12:07:09 -- common/autotest_common.sh@10 -- # set +x 00:12:56.570 [2024-06-11 12:07:09.144815] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.570 12:07:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.570 12:07:09 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:56.570 12:07:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.570 12:07:09 -- common/autotest_common.sh@10 -- # set +x 00:12:56.570 12:07:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.570 12:07:09 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:56.570 12:07:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.570 12:07:09 -- common/autotest_common.sh@10 -- # set +x 00:12:56.570 12:07:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.570 12:07:09 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:57.957 12:07:10 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:57.957 12:07:10 -- common/autotest_common.sh@1177 -- # local i=0 00:12:57.957 12:07:10 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:57.957 12:07:10 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:57.957 12:07:10 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:59.905 12:07:12 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:59.905 12:07:12 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:59.905 12:07:12 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:59.905 12:07:12 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:59.905 12:07:12 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:59.905 12:07:12 -- common/autotest_common.sh@1187 -- # return 0 00:12:59.905 12:07:12 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:59.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.905 12:07:12 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:59.905 12:07:12 -- common/autotest_common.sh@1198 -- # local i=0 00:12:59.905 12:07:12 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:59.905 12:07:12 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
00:12:59.905 12:07:12 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:59.905 12:07:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.905 12:07:12 -- common/autotest_common.sh@1210 -- # return 0 00:12:59.905 12:07:12 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:59.905 12:07:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.905 12:07:12 -- common/autotest_common.sh@10 -- # set +x 00:12:59.905 12:07:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.905 12:07:12 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.905 12:07:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.905 12:07:12 -- common/autotest_common.sh@10 -- # set +x 00:12:59.905 12:07:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.905 12:07:12 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:59.905 12:07:12 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:59.905 12:07:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.905 12:07:12 -- common/autotest_common.sh@10 -- # set +x 00:12:59.905 12:07:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.905 12:07:12 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.905 12:07:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.905 12:07:12 -- common/autotest_common.sh@10 -- # set +x 00:12:59.905 [2024-06-11 12:07:12.799431] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.905 12:07:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.905 12:07:12 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:59.905 12:07:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.905 12:07:12 -- common/autotest_common.sh@10 -- # set +x 00:12:59.905 12:07:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.905 12:07:12 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:59.905 12:07:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.905 12:07:12 -- common/autotest_common.sh@10 -- # set +x 00:12:59.905 12:07:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.905 12:07:12 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.835 12:07:14 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:01.835 12:07:14 -- common/autotest_common.sh@1177 -- # local i=0 00:13:01.835 12:07:14 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.835 12:07:14 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:01.835 12:07:14 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:03.753 12:07:16 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:03.753 12:07:16 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:03.753 12:07:16 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.753 12:07:16 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:03.753 12:07:16 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.753 12:07:16 -- 
common/autotest_common.sh@1187 -- # return 0 00:13:03.753 12:07:16 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.753 12:07:16 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.753 12:07:16 -- common/autotest_common.sh@1198 -- # local i=0 00:13:03.753 12:07:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:03.753 12:07:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.753 12:07:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.753 12:07:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:03.753 12:07:16 -- common/autotest_common.sh@1210 -- # return 0 00:13:03.753 12:07:16 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:03.753 12:07:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.753 12:07:16 -- common/autotest_common.sh@10 -- # set +x 00:13:03.753 12:07:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.753 12:07:16 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.753 12:07:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.753 12:07:16 -- common/autotest_common.sh@10 -- # set +x 00:13:03.753 12:07:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.753 12:07:16 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:03.753 12:07:16 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:03.753 12:07:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.753 12:07:16 -- common/autotest_common.sh@10 -- # set +x 00:13:03.753 12:07:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.753 12:07:16 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.753 12:07:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.753 12:07:16 -- common/autotest_common.sh@10 -- # set +x 00:13:03.753 [2024-06-11 12:07:16.505363] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.753 12:07:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.753 12:07:16 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:03.753 12:07:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.753 12:07:16 -- common/autotest_common.sh@10 -- # set +x 00:13:03.753 12:07:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.753 12:07:16 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:03.753 12:07:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.753 12:07:16 -- common/autotest_common.sh@10 -- # set +x 00:13:03.753 12:07:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.753 12:07:16 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.134 12:07:17 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:05.134 12:07:17 -- common/autotest_common.sh@1177 -- # local i=0 00:13:05.134 12:07:17 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:05.134 12:07:17 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:13:05.134 12:07:17 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:07.043 12:07:19 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:07.043 12:07:19 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:07.043 12:07:19 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:07.043 12:07:19 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:07.043 12:07:19 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:07.043 12:07:19 -- common/autotest_common.sh@1187 -- # return 0 00:13:07.043 12:07:19 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.303 12:07:20 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.303 12:07:20 -- common/autotest_common.sh@1198 -- # local i=0 00:13:07.303 12:07:20 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:07.303 12:07:20 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.303 12:07:20 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:07.303 12:07:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.303 12:07:20 -- common/autotest_common.sh@1210 -- # return 0 00:13:07.303 12:07:20 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:07.303 12:07:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.303 12:07:20 -- common/autotest_common.sh@10 -- # set +x 00:13:07.303 12:07:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.303 12:07:20 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.303 12:07:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.303 12:07:20 -- common/autotest_common.sh@10 -- # set +x 00:13:07.303 12:07:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.303 12:07:20 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:07.303 12:07:20 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.303 12:07:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.303 12:07:20 -- common/autotest_common.sh@10 -- # set +x 00:13:07.303 12:07:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.303 12:07:20 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.303 12:07:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.303 12:07:20 -- common/autotest_common.sh@10 -- # set +x 00:13:07.303 [2024-06-11 12:07:20.166291] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.303 12:07:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.303 12:07:20 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:07.303 12:07:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.303 12:07:20 -- common/autotest_common.sh@10 -- # set +x 00:13:07.303 12:07:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.303 12:07:20 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.303 12:07:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.303 12:07:20 -- common/autotest_common.sh@10 -- # set +x 00:13:07.303 12:07:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.303 
12:07:20 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.688 12:07:21 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:08.688 12:07:21 -- common/autotest_common.sh@1177 -- # local i=0 00:13:08.688 12:07:21 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.688 12:07:21 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:08.688 12:07:21 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:10.600 12:07:23 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:10.600 12:07:23 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:10.861 12:07:23 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.862 12:07:23 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:10.862 12:07:23 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.862 12:07:23 -- common/autotest_common.sh@1187 -- # return 0 00:13:10.862 12:07:23 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:10.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.862 12:07:23 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:10.862 12:07:23 -- common/autotest_common.sh@1198 -- # local i=0 00:13:10.862 12:07:23 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:10.862 12:07:23 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.862 12:07:23 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:10.862 12:07:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.862 12:07:23 -- common/autotest_common.sh@1210 -- # return 0 00:13:10.862 12:07:23 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:10.862 12:07:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.862 12:07:23 -- common/autotest_common.sh@10 -- # set +x 00:13:10.862 12:07:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.862 12:07:23 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.862 12:07:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.862 12:07:23 -- common/autotest_common.sh@10 -- # set +x 00:13:10.862 12:07:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.862 12:07:23 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:10.862 12:07:23 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:10.862 12:07:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.862 12:07:23 -- common/autotest_common.sh@10 -- # set +x 00:13:10.862 12:07:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.862 12:07:23 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.862 12:07:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.862 12:07:23 -- common/autotest_common.sh@10 -- # set +x 00:13:10.862 [2024-06-11 12:07:23.833815] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.862 12:07:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.862 12:07:23 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:10.862 
12:07:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.862 12:07:23 -- common/autotest_common.sh@10 -- # set +x 00:13:10.862 12:07:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.862 12:07:23 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:10.862 12:07:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.862 12:07:23 -- common/autotest_common.sh@10 -- # set +x 00:13:10.862 12:07:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.862 12:07:23 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.776 12:07:25 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:12.776 12:07:25 -- common/autotest_common.sh@1177 -- # local i=0 00:13:12.776 12:07:25 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:12.776 12:07:25 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:12.776 12:07:25 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:14.688 12:07:27 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:14.688 12:07:27 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:14.688 12:07:27 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.688 12:07:27 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:14.688 12:07:27 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.688 12:07:27 -- common/autotest_common.sh@1187 -- # return 0 00:13:14.688 12:07:27 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:14.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.688 12:07:27 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:14.688 12:07:27 -- common/autotest_common.sh@1198 -- # local i=0 00:13:14.688 12:07:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:14.688 12:07:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.688 12:07:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:14.688 12:07:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.688 12:07:27 -- common/autotest_common.sh@1210 -- # return 0 00:13:14.688 12:07:27 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.688 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.688 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.688 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.688 12:07:27 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.688 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.688 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.688 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.688 12:07:27 -- target/rpc.sh@99 -- # seq 1 5 00:13:14.688 12:07:27 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:14.688 12:07:27 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:14.688 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.688 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.688 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.688 12:07:27 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.688 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.688 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.688 [2024-06-11 12:07:27.497977] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.688 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.688 12:07:27 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:14.688 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.688 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.688 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.688 12:07:27 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:14.688 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.688 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.688 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.688 12:07:27 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.688 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.688 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.688 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.688 12:07:27 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.688 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.688 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.688 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.688 12:07:27 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:14.688 12:07:27 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:14.688 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.688 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.688 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.688 12:07:27 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.688 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.688 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.689 [2024-06-11 12:07:27.554100] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.689 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.689 12:07:27 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:14.689 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.689 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.689 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.689 12:07:27 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:14.689 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.689 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.689 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.689 12:07:27 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.689 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.689 12:07:27 -- 
common/autotest_common.sh@10 -- # set +x 00:13:14.689 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.689 12:07:27 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.689 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.689 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.689 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.689 12:07:27 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:14.689 12:07:27 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:14.689 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.689 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.689 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.689 12:07:27 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.689 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.689 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.689 [2024-06-11 12:07:27.614272] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.689 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.689 12:07:27 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:14.689 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.689 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.689 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.689 12:07:27 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:14.689 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.689 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.689 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.689 12:07:27 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.689 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.689 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.689 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.689 12:07:27 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.689 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.689 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.689 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.689 12:07:27 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:14.689 12:07:27 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:14.689 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.689 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.689 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.689 12:07:27 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.689 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.689 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.689 [2024-06-11 12:07:27.670459] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.689 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.689 
12:07:27 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:14.689 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.689 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.689 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.689 12:07:27 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:14.689 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.689 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.689 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.689 12:07:27 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.689 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.689 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.689 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.689 12:07:27 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.689 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.689 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.689 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.689 12:07:27 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:14.689 12:07:27 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:14.689 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.689 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.689 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.689 12:07:27 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.689 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.689 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.950 [2024-06-11 12:07:27.726657] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.950 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.950 12:07:27 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:14.950 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.950 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.950 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.950 12:07:27 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:14.950 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.950 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.950 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.950 12:07:27 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.950 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.950 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.950 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.950 12:07:27 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.950 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.950 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.950 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.950 12:07:27 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:13:14.950 12:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.950 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.950 12:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.950 12:07:27 -- target/rpc.sh@110 -- # stats='{ 00:13:14.950 "tick_rate": 2400000000, 00:13:14.950 "poll_groups": [ 00:13:14.950 { 00:13:14.950 "name": "nvmf_tgt_poll_group_0", 00:13:14.950 "admin_qpairs": 0, 00:13:14.950 "io_qpairs": 224, 00:13:14.950 "current_admin_qpairs": 0, 00:13:14.950 "current_io_qpairs": 0, 00:13:14.950 "pending_bdev_io": 0, 00:13:14.950 "completed_nvme_io": 230, 00:13:14.950 "transports": [ 00:13:14.950 { 00:13:14.950 "trtype": "TCP" 00:13:14.950 } 00:13:14.950 ] 00:13:14.950 }, 00:13:14.950 { 00:13:14.950 "name": "nvmf_tgt_poll_group_1", 00:13:14.950 "admin_qpairs": 1, 00:13:14.951 "io_qpairs": 223, 00:13:14.951 "current_admin_qpairs": 0, 00:13:14.951 "current_io_qpairs": 0, 00:13:14.951 "pending_bdev_io": 0, 00:13:14.951 "completed_nvme_io": 272, 00:13:14.951 "transports": [ 00:13:14.951 { 00:13:14.951 "trtype": "TCP" 00:13:14.951 } 00:13:14.951 ] 00:13:14.951 }, 00:13:14.951 { 00:13:14.951 "name": "nvmf_tgt_poll_group_2", 00:13:14.951 "admin_qpairs": 6, 00:13:14.951 "io_qpairs": 218, 00:13:14.951 "current_admin_qpairs": 0, 00:13:14.951 "current_io_qpairs": 0, 00:13:14.951 "pending_bdev_io": 0, 00:13:14.951 "completed_nvme_io": 512, 00:13:14.951 "transports": [ 00:13:14.951 { 00:13:14.951 "trtype": "TCP" 00:13:14.951 } 00:13:14.951 ] 00:13:14.951 }, 00:13:14.951 { 00:13:14.951 "name": "nvmf_tgt_poll_group_3", 00:13:14.951 "admin_qpairs": 0, 00:13:14.951 "io_qpairs": 224, 00:13:14.951 "current_admin_qpairs": 0, 00:13:14.951 "current_io_qpairs": 0, 00:13:14.951 "pending_bdev_io": 0, 00:13:14.951 "completed_nvme_io": 225, 00:13:14.951 "transports": [ 00:13:14.951 { 00:13:14.951 "trtype": "TCP" 00:13:14.951 } 00:13:14.951 ] 00:13:14.951 } 00:13:14.951 ] 00:13:14.951 }' 00:13:14.951 12:07:27 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:14.951 12:07:27 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:14.951 12:07:27 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:14.951 12:07:27 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:14.951 12:07:27 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:14.951 12:07:27 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:14.951 12:07:27 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:14.951 12:07:27 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:14.951 12:07:27 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:14.951 12:07:27 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:14.951 12:07:27 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:14.951 12:07:27 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:14.951 12:07:27 -- target/rpc.sh@123 -- # nvmftestfini 00:13:14.951 12:07:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:14.951 12:07:27 -- nvmf/common.sh@116 -- # sync 00:13:14.951 12:07:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:14.951 12:07:27 -- nvmf/common.sh@119 -- # set +e 00:13:14.951 12:07:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:14.951 12:07:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:14.951 rmmod nvme_tcp 00:13:14.951 rmmod nvme_fabrics 00:13:14.951 rmmod nvme_keyring 00:13:14.951 12:07:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:14.951 12:07:27 -- nvmf/common.sh@123 -- # set -e 00:13:14.951 12:07:27 -- 
nvmf/common.sh@124 -- # return 0 00:13:14.951 12:07:27 -- nvmf/common.sh@477 -- # '[' -n 1373513 ']' 00:13:14.951 12:07:27 -- nvmf/common.sh@478 -- # killprocess 1373513 00:13:14.951 12:07:27 -- common/autotest_common.sh@926 -- # '[' -z 1373513 ']' 00:13:14.951 12:07:27 -- common/autotest_common.sh@930 -- # kill -0 1373513 00:13:14.951 12:07:27 -- common/autotest_common.sh@931 -- # uname 00:13:14.951 12:07:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:14.951 12:07:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1373513 00:13:15.212 12:07:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:15.212 12:07:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:15.212 12:07:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1373513' 00:13:15.212 killing process with pid 1373513 00:13:15.212 12:07:28 -- common/autotest_common.sh@945 -- # kill 1373513 00:13:15.212 12:07:28 -- common/autotest_common.sh@950 -- # wait 1373513 00:13:15.212 12:07:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:15.212 12:07:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:15.212 12:07:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:15.212 12:07:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:15.212 12:07:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:15.212 12:07:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.212 12:07:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.212 12:07:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.759 12:07:30 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:17.759 00:13:17.759 real 0m36.929s 00:13:17.759 user 1m51.184s 00:13:17.759 sys 0m6.971s 00:13:17.759 12:07:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:17.759 12:07:30 -- common/autotest_common.sh@10 -- # set +x 00:13:17.759 ************************************ 00:13:17.759 END TEST nvmf_rpc 00:13:17.759 ************************************ 00:13:17.759 12:07:30 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:17.759 12:07:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:17.759 12:07:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:17.759 12:07:30 -- common/autotest_common.sh@10 -- # set +x 00:13:17.759 ************************************ 00:13:17.759 START TEST nvmf_invalid 00:13:17.759 ************************************ 00:13:17.759 12:07:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:17.759 * Looking for test storage... 
00:13:17.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:17.759 12:07:30 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.759 12:07:30 -- nvmf/common.sh@7 -- # uname -s 00:13:17.759 12:07:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.759 12:07:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.759 12:07:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.759 12:07:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.759 12:07:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.759 12:07:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.759 12:07:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.759 12:07:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.759 12:07:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.759 12:07:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.759 12:07:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:17.759 12:07:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:17.759 12:07:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.759 12:07:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.759 12:07:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:17.759 12:07:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:17.759 12:07:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.759 12:07:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.759 12:07:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.759 12:07:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.759 12:07:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.759 12:07:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.759 12:07:30 -- paths/export.sh@5 -- # export PATH 00:13:17.759 12:07:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.759 12:07:30 -- nvmf/common.sh@46 -- # : 0 00:13:17.759 12:07:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:17.759 12:07:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:17.759 12:07:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:17.759 12:07:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.759 12:07:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.759 12:07:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:17.759 12:07:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:17.759 12:07:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:17.759 12:07:30 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:17.759 12:07:30 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:17.759 12:07:30 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:17.759 12:07:30 -- target/invalid.sh@14 -- # target=foobar 00:13:17.759 12:07:30 -- target/invalid.sh@16 -- # RANDOM=0 00:13:17.759 12:07:30 -- target/invalid.sh@34 -- # nvmftestinit 00:13:17.759 12:07:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:17.759 12:07:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.759 12:07:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:17.759 12:07:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:17.759 12:07:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:17.759 12:07:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.759 12:07:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.759 12:07:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.759 12:07:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:17.759 12:07:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:17.759 12:07:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:17.759 12:07:30 -- common/autotest_common.sh@10 -- # set +x 00:13:24.349 12:07:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:24.349 12:07:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:24.349 12:07:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:24.349 12:07:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:24.349 12:07:37 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:24.349 12:07:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:24.349 12:07:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:24.349 12:07:37 -- nvmf/common.sh@294 -- # net_devs=() 00:13:24.349 12:07:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:24.349 12:07:37 -- nvmf/common.sh@295 -- # e810=() 00:13:24.349 12:07:37 -- nvmf/common.sh@295 -- # local -ga e810 00:13:24.349 12:07:37 -- nvmf/common.sh@296 -- # x722=() 00:13:24.349 12:07:37 -- nvmf/common.sh@296 -- # local -ga x722 00:13:24.349 12:07:37 -- nvmf/common.sh@297 -- # mlx=() 00:13:24.349 12:07:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:24.349 12:07:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:24.349 12:07:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:24.349 12:07:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:24.349 12:07:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:24.349 12:07:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:24.349 12:07:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:24.349 12:07:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:24.349 12:07:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:24.349 12:07:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:24.349 12:07:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:24.349 12:07:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:24.349 12:07:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:24.349 12:07:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:24.349 12:07:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:24.349 12:07:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:24.349 12:07:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:24.349 12:07:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:24.349 12:07:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:24.349 12:07:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:24.349 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:24.349 12:07:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:24.349 12:07:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:24.349 12:07:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.349 12:07:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.349 12:07:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:24.349 12:07:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:24.349 12:07:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:24.349 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:24.349 12:07:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:24.349 12:07:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:24.349 12:07:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.349 12:07:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.349 12:07:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:24.349 12:07:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:24.349 12:07:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:24.349 12:07:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:24.349 12:07:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:24.349 
12:07:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.349 12:07:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:24.349 12:07:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.349 12:07:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:24.349 Found net devices under 0000:31:00.0: cvl_0_0 00:13:24.349 12:07:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.349 12:07:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:24.349 12:07:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.349 12:07:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:24.349 12:07:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.349 12:07:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:24.349 Found net devices under 0000:31:00.1: cvl_0_1 00:13:24.349 12:07:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.349 12:07:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:24.349 12:07:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:24.349 12:07:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:24.349 12:07:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:24.349 12:07:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:24.349 12:07:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:24.349 12:07:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:24.349 12:07:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:24.349 12:07:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:24.349 12:07:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:24.349 12:07:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:24.349 12:07:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:24.349 12:07:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:24.349 12:07:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:24.349 12:07:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:24.349 12:07:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:24.349 12:07:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:24.349 12:07:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:24.349 12:07:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:24.349 12:07:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:24.349 12:07:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:24.349 12:07:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:24.611 12:07:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:24.611 12:07:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:24.611 12:07:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:24.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:24.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:13:24.611 00:13:24.611 --- 10.0.0.2 ping statistics --- 00:13:24.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.611 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:13:24.611 12:07:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:24.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:24.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:13:24.611 00:13:24.611 --- 10.0.0.1 ping statistics --- 00:13:24.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.611 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:13:24.611 12:07:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:24.611 12:07:37 -- nvmf/common.sh@410 -- # return 0 00:13:24.611 12:07:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:24.611 12:07:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:24.611 12:07:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:24.611 12:07:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:24.611 12:07:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:24.611 12:07:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:24.611 12:07:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:24.611 12:07:37 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:24.611 12:07:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:24.611 12:07:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:24.611 12:07:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.611 12:07:37 -- nvmf/common.sh@469 -- # nvmfpid=1383183 00:13:24.611 12:07:37 -- nvmf/common.sh@470 -- # waitforlisten 1383183 00:13:24.611 12:07:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:24.611 12:07:37 -- common/autotest_common.sh@819 -- # '[' -z 1383183 ']' 00:13:24.611 12:07:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.611 12:07:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:24.611 12:07:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.611 12:07:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:24.611 12:07:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.611 [2024-06-11 12:07:37.576927] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:24.611 [2024-06-11 12:07:37.576988] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.611 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.872 [2024-06-11 12:07:37.649270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:24.873 [2024-06-11 12:07:37.686739] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:24.873 [2024-06-11 12:07:37.686882] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.873 [2024-06-11 12:07:37.686893] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.873 [2024-06-11 12:07:37.686901] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
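The trace above is the harness bringing nvmf_tgt up inside the cvl_0_0_ns_spdk namespace and blocking in waitforlisten until the application is answering on the RPC socket, before invalid.sh starts issuing rpc.py calls against it. A minimal standalone sketch of that bring-up, assuming the default /var/tmp/spdk.sock socket path and a simple socket-existence poll in place of the harness's own waitforlisten loop, might look roughly like this:

#!/usr/bin/env bash
# Sketch only: the poll loop and variable names are assumptions, not the harness's exact code.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS="ip netns exec cvl_0_0_ns_spdk"

# Launch the target in the test namespace, as the log's nvmfappstart does (requires root).
$NS $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Crude stand-in for waitforlisten: poll for the default UNIX-domain RPC socket.
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break
    sleep 0.1
done

# Provision the TCP transport and a subsystem the way the rpc.sh loop above does.
$SPDK/scripts/rpc.py nvmf_create_transport --trtype tcp
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Teardown mirrors what the log shows: nvmf_delete_subsystem over the same rpc.py socket, followed by killing $nvmfpid.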
00:13:24.873 [2024-06-11 12:07:37.687084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.873 [2024-06-11 12:07:37.687298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.873 [2024-06-11 12:07:37.687299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:24.873 [2024-06-11 12:07:37.687157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.445 12:07:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:25.445 12:07:38 -- common/autotest_common.sh@852 -- # return 0 00:13:25.445 12:07:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:25.445 12:07:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:25.445 12:07:38 -- common/autotest_common.sh@10 -- # set +x 00:13:25.445 12:07:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.445 12:07:38 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:25.445 12:07:38 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5571 00:13:25.706 [2024-06-11 12:07:38.530701] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:25.706 12:07:38 -- target/invalid.sh@40 -- # out='request: 00:13:25.706 { 00:13:25.706 "nqn": "nqn.2016-06.io.spdk:cnode5571", 00:13:25.706 "tgt_name": "foobar", 00:13:25.706 "method": "nvmf_create_subsystem", 00:13:25.706 "req_id": 1 00:13:25.706 } 00:13:25.706 Got JSON-RPC error response 00:13:25.706 response: 00:13:25.706 { 00:13:25.706 "code": -32603, 00:13:25.707 "message": "Unable to find target foobar" 00:13:25.707 }' 00:13:25.707 12:07:38 -- target/invalid.sh@41 -- # [[ request: 00:13:25.707 { 00:13:25.707 "nqn": "nqn.2016-06.io.spdk:cnode5571", 00:13:25.707 "tgt_name": "foobar", 00:13:25.707 "method": "nvmf_create_subsystem", 00:13:25.707 "req_id": 1 00:13:25.707 } 00:13:25.707 Got JSON-RPC error response 00:13:25.707 response: 00:13:25.707 { 00:13:25.707 "code": -32603, 00:13:25.707 "message": "Unable to find target foobar" 00:13:25.707 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:25.707 12:07:38 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:25.707 12:07:38 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode32094 00:13:25.707 [2024-06-11 12:07:38.703314] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32094: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:25.707 12:07:38 -- target/invalid.sh@45 -- # out='request: 00:13:25.707 { 00:13:25.707 "nqn": "nqn.2016-06.io.spdk:cnode32094", 00:13:25.707 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:25.707 "method": "nvmf_create_subsystem", 00:13:25.707 "req_id": 1 00:13:25.707 } 00:13:25.707 Got JSON-RPC error response 00:13:25.707 response: 00:13:25.707 { 00:13:25.707 "code": -32602, 00:13:25.707 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:25.707 }' 00:13:25.707 12:07:38 -- target/invalid.sh@46 -- # [[ request: 00:13:25.707 { 00:13:25.707 "nqn": "nqn.2016-06.io.spdk:cnode32094", 00:13:25.707 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:25.707 "method": "nvmf_create_subsystem", 00:13:25.707 "req_id": 1 00:13:25.707 } 00:13:25.707 Got JSON-RPC error response 00:13:25.707 response: 00:13:25.707 { 
00:13:25.707 "code": -32602, 00:13:25.707 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:25.707 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:25.707 12:07:38 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:25.707 12:07:38 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode18177 00:13:25.969 [2024-06-11 12:07:38.871802] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18177: invalid model number 'SPDK_Controller' 00:13:25.969 12:07:38 -- target/invalid.sh@50 -- # out='request: 00:13:25.969 { 00:13:25.969 "nqn": "nqn.2016-06.io.spdk:cnode18177", 00:13:25.969 "model_number": "SPDK_Controller\u001f", 00:13:25.969 "method": "nvmf_create_subsystem", 00:13:25.969 "req_id": 1 00:13:25.969 } 00:13:25.969 Got JSON-RPC error response 00:13:25.969 response: 00:13:25.969 { 00:13:25.969 "code": -32602, 00:13:25.969 "message": "Invalid MN SPDK_Controller\u001f" 00:13:25.969 }' 00:13:25.969 12:07:38 -- target/invalid.sh@51 -- # [[ request: 00:13:25.969 { 00:13:25.969 "nqn": "nqn.2016-06.io.spdk:cnode18177", 00:13:25.969 "model_number": "SPDK_Controller\u001f", 00:13:25.969 "method": "nvmf_create_subsystem", 00:13:25.969 "req_id": 1 00:13:25.969 } 00:13:25.969 Got JSON-RPC error response 00:13:25.969 response: 00:13:25.969 { 00:13:25.969 "code": -32602, 00:13:25.969 "message": "Invalid MN SPDK_Controller\u001f" 00:13:25.969 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:25.969 12:07:38 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:25.969 12:07:38 -- target/invalid.sh@19 -- # local length=21 ll 00:13:25.969 12:07:38 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:25.969 12:07:38 -- target/invalid.sh@21 -- # local chars 00:13:25.969 12:07:38 -- target/invalid.sh@22 -- # local string 00:13:25.969 12:07:38 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:25.969 12:07:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # printf %x 95 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # string+=_ 00:13:25.969 12:07:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.969 12:07:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # printf %x 99 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # string+=c 00:13:25.969 12:07:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.969 12:07:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # printf %x 97 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # string+=a 00:13:25.969 12:07:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.969 12:07:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # printf %x 120 00:13:25.969 12:07:38 -- 
target/invalid.sh@25 -- # echo -e '\x78' 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # string+=x 00:13:25.969 12:07:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.969 12:07:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # printf %x 75 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # string+=K 00:13:25.969 12:07:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.969 12:07:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # printf %x 82 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # string+=R 00:13:25.969 12:07:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.969 12:07:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # printf %x 59 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # string+=';' 00:13:25.969 12:07:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.969 12:07:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # printf %x 59 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # string+=';' 00:13:25.969 12:07:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.969 12:07:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # printf %x 104 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # string+=h 00:13:25.969 12:07:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.969 12:07:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # printf %x 40 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # string+='(' 00:13:25.969 12:07:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.969 12:07:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # printf %x 47 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # string+=/ 00:13:25.969 12:07:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.969 12:07:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # printf %x 53 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # string+=5 00:13:25.969 12:07:38 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.969 12:07:38 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # printf %x 90 00:13:25.969 12:07:38 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:25.969 12:07:39 -- target/invalid.sh@25 -- # string+=Z 00:13:25.969 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:25.969 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.231 12:07:39 -- target/invalid.sh@25 -- # printf %x 74 00:13:26.231 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:26.231 12:07:39 -- target/invalid.sh@25 -- # string+=J 00:13:26.231 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.231 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.231 12:07:39 -- target/invalid.sh@25 -- # printf %x 48 00:13:26.231 12:07:39 -- 
target/invalid.sh@25 -- # echo -e '\x30' 00:13:26.231 12:07:39 -- target/invalid.sh@25 -- # string+=0 00:13:26.231 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.231 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.231 12:07:39 -- target/invalid.sh@25 -- # printf %x 73 00:13:26.231 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:26.231 12:07:39 -- target/invalid.sh@25 -- # string+=I 00:13:26.231 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.231 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.231 12:07:39 -- target/invalid.sh@25 -- # printf %x 35 00:13:26.231 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:26.231 12:07:39 -- target/invalid.sh@25 -- # string+='#' 00:13:26.231 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.231 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.231 12:07:39 -- target/invalid.sh@25 -- # printf %x 125 00:13:26.231 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:26.231 12:07:39 -- target/invalid.sh@25 -- # string+='}' 00:13:26.231 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.231 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.231 12:07:39 -- target/invalid.sh@25 -- # printf %x 72 00:13:26.231 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:26.232 12:07:39 -- target/invalid.sh@25 -- # string+=H 00:13:26.232 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.232 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.232 12:07:39 -- target/invalid.sh@25 -- # printf %x 81 00:13:26.232 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:26.232 12:07:39 -- target/invalid.sh@25 -- # string+=Q 00:13:26.232 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.232 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.232 12:07:39 -- target/invalid.sh@25 -- # printf %x 34 00:13:26.232 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:26.232 12:07:39 -- target/invalid.sh@25 -- # string+='"' 00:13:26.232 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.232 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.232 12:07:39 -- target/invalid.sh@28 -- # [[ _ == \- ]] 00:13:26.232 12:07:39 -- target/invalid.sh@31 -- # echo '_caxKR;;h(/5ZJ0I#}HQ"' 00:13:26.232 12:07:39 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '_caxKR;;h(/5ZJ0I#}HQ"' nqn.2016-06.io.spdk:cnode11715 00:13:26.232 [2024-06-11 12:07:39.200865] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11715: invalid serial number '_caxKR;;h(/5ZJ0I#}HQ"' 00:13:26.232 12:07:39 -- target/invalid.sh@54 -- # out='request: 00:13:26.232 { 00:13:26.232 "nqn": "nqn.2016-06.io.spdk:cnode11715", 00:13:26.232 "serial_number": "_caxKR;;h(/5ZJ0I#}HQ\"", 00:13:26.232 "method": "nvmf_create_subsystem", 00:13:26.232 "req_id": 1 00:13:26.232 } 00:13:26.232 Got JSON-RPC error response 00:13:26.232 response: 00:13:26.232 { 00:13:26.232 "code": -32602, 00:13:26.232 "message": "Invalid SN _caxKR;;h(/5ZJ0I#}HQ\"" 00:13:26.232 }' 00:13:26.232 12:07:39 -- target/invalid.sh@55 -- # [[ request: 00:13:26.232 { 00:13:26.232 "nqn": "nqn.2016-06.io.spdk:cnode11715", 00:13:26.232 "serial_number": "_caxKR;;h(/5ZJ0I#}HQ\"", 00:13:26.232 "method": "nvmf_create_subsystem", 00:13:26.232 "req_id": 1 00:13:26.232 } 00:13:26.232 Got JSON-RPC error response 00:13:26.232 response: 00:13:26.232 { 00:13:26.232 "code": -32602, 00:13:26.232 
"message": "Invalid SN _caxKR;;h(/5ZJ0I#}HQ\"" 00:13:26.232 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:26.232 12:07:39 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:26.232 12:07:39 -- target/invalid.sh@19 -- # local length=41 ll 00:13:26.232 12:07:39 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:26.232 12:07:39 -- target/invalid.sh@21 -- # local chars 00:13:26.232 12:07:39 -- target/invalid.sh@22 -- # local string 00:13:26.232 12:07:39 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:26.232 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.232 12:07:39 -- target/invalid.sh@25 -- # printf %x 55 00:13:26.232 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:26.232 12:07:39 -- target/invalid.sh@25 -- # string+=7 00:13:26.232 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.232 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.232 12:07:39 -- target/invalid.sh@25 -- # printf %x 53 00:13:26.232 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:26.232 12:07:39 -- target/invalid.sh@25 -- # string+=5 00:13:26.232 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.232 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.232 12:07:39 -- target/invalid.sh@25 -- # printf %x 97 00:13:26.232 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:26.232 12:07:39 -- target/invalid.sh@25 -- # string+=a 00:13:26.232 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.232 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.232 12:07:39 -- target/invalid.sh@25 -- # printf %x 76 00:13:26.232 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:26.493 12:07:39 -- target/invalid.sh@25 -- # string+=L 00:13:26.493 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.493 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.493 12:07:39 -- target/invalid.sh@25 -- # printf %x 35 00:13:26.493 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:26.493 12:07:39 -- target/invalid.sh@25 -- # string+='#' 00:13:26.493 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.493 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.493 12:07:39 -- target/invalid.sh@25 -- # printf %x 55 00:13:26.493 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:26.493 12:07:39 -- target/invalid.sh@25 -- # string+=7 00:13:26.493 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.493 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.493 12:07:39 -- target/invalid.sh@25 -- # printf %x 124 00:13:26.493 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:26.493 12:07:39 -- target/invalid.sh@25 -- # string+='|' 00:13:26.493 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.493 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.493 12:07:39 -- target/invalid.sh@25 -- # printf %x 107 00:13:26.493 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:26.493 12:07:39 -- target/invalid.sh@25 -- # string+=k 00:13:26.493 12:07:39 -- target/invalid.sh@24 -- 
# (( ll++ )) 00:13:26.493 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.493 12:07:39 -- target/invalid.sh@25 -- # printf %x 78 00:13:26.493 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:26.493 12:07:39 -- target/invalid.sh@25 -- # string+=N 00:13:26.493 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.493 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 92 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+='\' 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 85 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+=U 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 121 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+=y 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 125 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+='}' 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 40 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+='(' 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 104 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+=h 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 71 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+=G 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 115 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+=s 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 46 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+=. 
00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 86 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+=V 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 125 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+='}' 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 95 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+=_ 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 78 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+=N 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 83 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+=S 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 88 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+=X 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 62 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+='>' 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 86 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+=V 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 71 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+=G 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 127 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+=$'\177' 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 75 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+=K 
00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 78 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+=N 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 55 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+=7 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 122 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+=z 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 91 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+='[' 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 108 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+=l 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 36 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+='$' 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 34 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+='"' 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 95 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+=_ 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 68 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+=D 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 63 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+='?' 
00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # printf %x 97 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:26.494 12:07:39 -- target/invalid.sh@25 -- # string+=a 00:13:26.494 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.755 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.755 12:07:39 -- target/invalid.sh@25 -- # printf %x 109 00:13:26.755 12:07:39 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:26.755 12:07:39 -- target/invalid.sh@25 -- # string+=m 00:13:26.755 12:07:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:26.755 12:07:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:26.755 12:07:39 -- target/invalid.sh@28 -- # [[ 7 == \- ]] 00:13:26.755 12:07:39 -- target/invalid.sh@31 -- # echo '75aL#7|kN\Uy}(hGs.V}_NSX>VGKN7z[l$"_D?am' 00:13:26.755 12:07:39 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '75aL#7|kN\Uy}(hGs.V}_NSX>VGKN7z[l$"_D?am' nqn.2016-06.io.spdk:cnode8571 00:13:26.755 [2024-06-11 12:07:39.674376] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8571: invalid model number '75aL#7|kN\Uy}(hGs.V}_NSX>VGKN7z[l$"_D?am' 00:13:26.755 12:07:39 -- target/invalid.sh@58 -- # out='request: 00:13:26.755 { 00:13:26.755 "nqn": "nqn.2016-06.io.spdk:cnode8571", 00:13:26.755 "model_number": "75aL#7|kN\\Uy}(hGs.V}_NSX>VG\u007fKN7z[l$\"_D?am", 00:13:26.755 "method": "nvmf_create_subsystem", 00:13:26.755 "req_id": 1 00:13:26.755 } 00:13:26.755 Got JSON-RPC error response 00:13:26.755 response: 00:13:26.755 { 00:13:26.755 "code": -32602, 00:13:26.755 "message": "Invalid MN 75aL#7|kN\\Uy}(hGs.V}_NSX>VG\u007fKN7z[l$\"_D?am" 00:13:26.755 }' 00:13:26.755 12:07:39 -- target/invalid.sh@59 -- # [[ request: 00:13:26.755 { 00:13:26.755 "nqn": "nqn.2016-06.io.spdk:cnode8571", 00:13:26.755 "model_number": "75aL#7|kN\\Uy}(hGs.V}_NSX>VG\u007fKN7z[l$\"_D?am", 00:13:26.755 "method": "nvmf_create_subsystem", 00:13:26.755 "req_id": 1 00:13:26.755 } 00:13:26.755 Got JSON-RPC error response 00:13:26.755 response: 00:13:26.755 { 00:13:26.755 "code": -32602, 00:13:26.755 "message": "Invalid MN 75aL#7|kN\\Uy}(hGs.V}_NSX>VG\u007fKN7z[l$\"_D?am" 00:13:26.755 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:26.755 12:07:39 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:27.015 [2024-06-11 12:07:39.838977] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:27.015 12:07:39 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:27.015 12:07:40 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:27.015 12:07:40 -- target/invalid.sh@67 -- # echo '' 00:13:27.015 12:07:40 -- target/invalid.sh@67 -- # head -n 1 00:13:27.015 12:07:40 -- target/invalid.sh@67 -- # IP= 00:13:27.015 12:07:40 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:27.276 [2024-06-11 12:07:40.184159] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:27.276 12:07:40 -- target/invalid.sh@69 -- # out='request: 00:13:27.276 { 00:13:27.276 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:27.276 
"listen_address": { 00:13:27.276 "trtype": "tcp", 00:13:27.276 "traddr": "", 00:13:27.276 "trsvcid": "4421" 00:13:27.276 }, 00:13:27.276 "method": "nvmf_subsystem_remove_listener", 00:13:27.276 "req_id": 1 00:13:27.276 } 00:13:27.276 Got JSON-RPC error response 00:13:27.276 response: 00:13:27.276 { 00:13:27.276 "code": -32602, 00:13:27.276 "message": "Invalid parameters" 00:13:27.276 }' 00:13:27.276 12:07:40 -- target/invalid.sh@70 -- # [[ request: 00:13:27.276 { 00:13:27.276 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:27.276 "listen_address": { 00:13:27.276 "trtype": "tcp", 00:13:27.276 "traddr": "", 00:13:27.276 "trsvcid": "4421" 00:13:27.276 }, 00:13:27.276 "method": "nvmf_subsystem_remove_listener", 00:13:27.276 "req_id": 1 00:13:27.276 } 00:13:27.276 Got JSON-RPC error response 00:13:27.276 response: 00:13:27.276 { 00:13:27.276 "code": -32602, 00:13:27.276 "message": "Invalid parameters" 00:13:27.276 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:27.276 12:07:40 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7468 -i 0 00:13:27.535 [2024-06-11 12:07:40.352683] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7468: invalid cntlid range [0-65519] 00:13:27.535 12:07:40 -- target/invalid.sh@73 -- # out='request: 00:13:27.535 { 00:13:27.535 "nqn": "nqn.2016-06.io.spdk:cnode7468", 00:13:27.535 "min_cntlid": 0, 00:13:27.535 "method": "nvmf_create_subsystem", 00:13:27.535 "req_id": 1 00:13:27.535 } 00:13:27.535 Got JSON-RPC error response 00:13:27.535 response: 00:13:27.535 { 00:13:27.535 "code": -32602, 00:13:27.535 "message": "Invalid cntlid range [0-65519]" 00:13:27.535 }' 00:13:27.535 12:07:40 -- target/invalid.sh@74 -- # [[ request: 00:13:27.535 { 00:13:27.535 "nqn": "nqn.2016-06.io.spdk:cnode7468", 00:13:27.535 "min_cntlid": 0, 00:13:27.535 "method": "nvmf_create_subsystem", 00:13:27.535 "req_id": 1 00:13:27.535 } 00:13:27.535 Got JSON-RPC error response 00:13:27.535 response: 00:13:27.535 { 00:13:27.535 "code": -32602, 00:13:27.535 "message": "Invalid cntlid range [0-65519]" 00:13:27.535 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:27.535 12:07:40 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2449 -i 65520 00:13:27.535 [2024-06-11 12:07:40.521215] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2449: invalid cntlid range [65520-65519] 00:13:27.535 12:07:40 -- target/invalid.sh@75 -- # out='request: 00:13:27.535 { 00:13:27.535 "nqn": "nqn.2016-06.io.spdk:cnode2449", 00:13:27.535 "min_cntlid": 65520, 00:13:27.535 "method": "nvmf_create_subsystem", 00:13:27.535 "req_id": 1 00:13:27.535 } 00:13:27.535 Got JSON-RPC error response 00:13:27.535 response: 00:13:27.535 { 00:13:27.535 "code": -32602, 00:13:27.535 "message": "Invalid cntlid range [65520-65519]" 00:13:27.535 }' 00:13:27.535 12:07:40 -- target/invalid.sh@76 -- # [[ request: 00:13:27.535 { 00:13:27.536 "nqn": "nqn.2016-06.io.spdk:cnode2449", 00:13:27.536 "min_cntlid": 65520, 00:13:27.536 "method": "nvmf_create_subsystem", 00:13:27.536 "req_id": 1 00:13:27.536 } 00:13:27.536 Got JSON-RPC error response 00:13:27.536 response: 00:13:27.536 { 00:13:27.536 "code": -32602, 00:13:27.536 "message": "Invalid cntlid range [65520-65519]" 00:13:27.536 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:27.536 12:07:40 -- target/invalid.sh@77 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29329 -I 0 00:13:27.795 [2024-06-11 12:07:40.689772] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29329: invalid cntlid range [1-0] 00:13:27.795 12:07:40 -- target/invalid.sh@77 -- # out='request: 00:13:27.795 { 00:13:27.795 "nqn": "nqn.2016-06.io.spdk:cnode29329", 00:13:27.795 "max_cntlid": 0, 00:13:27.795 "method": "nvmf_create_subsystem", 00:13:27.795 "req_id": 1 00:13:27.795 } 00:13:27.795 Got JSON-RPC error response 00:13:27.795 response: 00:13:27.795 { 00:13:27.795 "code": -32602, 00:13:27.795 "message": "Invalid cntlid range [1-0]" 00:13:27.795 }' 00:13:27.795 12:07:40 -- target/invalid.sh@78 -- # [[ request: 00:13:27.795 { 00:13:27.795 "nqn": "nqn.2016-06.io.spdk:cnode29329", 00:13:27.795 "max_cntlid": 0, 00:13:27.795 "method": "nvmf_create_subsystem", 00:13:27.796 "req_id": 1 00:13:27.796 } 00:13:27.796 Got JSON-RPC error response 00:13:27.796 response: 00:13:27.796 { 00:13:27.796 "code": -32602, 00:13:27.796 "message": "Invalid cntlid range [1-0]" 00:13:27.796 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:27.796 12:07:40 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9285 -I 65520 00:13:28.055 [2024-06-11 12:07:40.850326] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9285: invalid cntlid range [1-65520] 00:13:28.055 12:07:40 -- target/invalid.sh@79 -- # out='request: 00:13:28.055 { 00:13:28.055 "nqn": "nqn.2016-06.io.spdk:cnode9285", 00:13:28.055 "max_cntlid": 65520, 00:13:28.055 "method": "nvmf_create_subsystem", 00:13:28.055 "req_id": 1 00:13:28.055 } 00:13:28.055 Got JSON-RPC error response 00:13:28.055 response: 00:13:28.055 { 00:13:28.055 "code": -32602, 00:13:28.055 "message": "Invalid cntlid range [1-65520]" 00:13:28.055 }' 00:13:28.055 12:07:40 -- target/invalid.sh@80 -- # [[ request: 00:13:28.055 { 00:13:28.055 "nqn": "nqn.2016-06.io.spdk:cnode9285", 00:13:28.055 "max_cntlid": 65520, 00:13:28.055 "method": "nvmf_create_subsystem", 00:13:28.055 "req_id": 1 00:13:28.055 } 00:13:28.055 Got JSON-RPC error response 00:13:28.055 response: 00:13:28.055 { 00:13:28.056 "code": -32602, 00:13:28.056 "message": "Invalid cntlid range [1-65520]" 00:13:28.056 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:28.056 12:07:40 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode42 -i 6 -I 5 00:13:28.056 [2024-06-11 12:07:41.010850] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode42: invalid cntlid range [6-5] 00:13:28.056 12:07:41 -- target/invalid.sh@83 -- # out='request: 00:13:28.056 { 00:13:28.056 "nqn": "nqn.2016-06.io.spdk:cnode42", 00:13:28.056 "min_cntlid": 6, 00:13:28.056 "max_cntlid": 5, 00:13:28.056 "method": "nvmf_create_subsystem", 00:13:28.056 "req_id": 1 00:13:28.056 } 00:13:28.056 Got JSON-RPC error response 00:13:28.056 response: 00:13:28.056 { 00:13:28.056 "code": -32602, 00:13:28.056 "message": "Invalid cntlid range [6-5]" 00:13:28.056 }' 00:13:28.056 12:07:41 -- target/invalid.sh@84 -- # [[ request: 00:13:28.056 { 00:13:28.056 "nqn": "nqn.2016-06.io.spdk:cnode42", 00:13:28.056 "min_cntlid": 6, 00:13:28.056 "max_cntlid": 5, 00:13:28.056 "method": "nvmf_create_subsystem", 00:13:28.056 "req_id": 1 00:13:28.056 } 00:13:28.056 Got JSON-RPC error 
response 00:13:28.056 response: 00:13:28.056 { 00:13:28.056 "code": -32602, 00:13:28.056 "message": "Invalid cntlid range [6-5]" 00:13:28.056 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:28.056 12:07:41 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:28.315 12:07:41 -- target/invalid.sh@87 -- # out='request: 00:13:28.315 { 00:13:28.315 "name": "foobar", 00:13:28.315 "method": "nvmf_delete_target", 00:13:28.315 "req_id": 1 00:13:28.315 } 00:13:28.315 Got JSON-RPC error response 00:13:28.315 response: 00:13:28.315 { 00:13:28.315 "code": -32602, 00:13:28.315 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:28.315 }' 00:13:28.315 12:07:41 -- target/invalid.sh@88 -- # [[ request: 00:13:28.315 { 00:13:28.315 "name": "foobar", 00:13:28.315 "method": "nvmf_delete_target", 00:13:28.315 "req_id": 1 00:13:28.315 } 00:13:28.315 Got JSON-RPC error response 00:13:28.315 response: 00:13:28.315 { 00:13:28.315 "code": -32602, 00:13:28.315 "message": "The specified target doesn't exist, cannot delete it." 00:13:28.315 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:28.315 12:07:41 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:28.315 12:07:41 -- target/invalid.sh@91 -- # nvmftestfini 00:13:28.315 12:07:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:28.315 12:07:41 -- nvmf/common.sh@116 -- # sync 00:13:28.315 12:07:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:28.315 12:07:41 -- nvmf/common.sh@119 -- # set +e 00:13:28.315 12:07:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:28.315 12:07:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:28.315 rmmod nvme_tcp 00:13:28.315 rmmod nvme_fabrics 00:13:28.315 rmmod nvme_keyring 00:13:28.315 12:07:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:28.315 12:07:41 -- nvmf/common.sh@123 -- # set -e 00:13:28.315 12:07:41 -- nvmf/common.sh@124 -- # return 0 00:13:28.315 12:07:41 -- nvmf/common.sh@477 -- # '[' -n 1383183 ']' 00:13:28.316 12:07:41 -- nvmf/common.sh@478 -- # killprocess 1383183 00:13:28.316 12:07:41 -- common/autotest_common.sh@926 -- # '[' -z 1383183 ']' 00:13:28.316 12:07:41 -- common/autotest_common.sh@930 -- # kill -0 1383183 00:13:28.316 12:07:41 -- common/autotest_common.sh@931 -- # uname 00:13:28.316 12:07:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:28.316 12:07:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1383183 00:13:28.316 12:07:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:28.316 12:07:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:28.316 12:07:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1383183' 00:13:28.316 killing process with pid 1383183 00:13:28.316 12:07:41 -- common/autotest_common.sh@945 -- # kill 1383183 00:13:28.316 12:07:41 -- common/autotest_common.sh@950 -- # wait 1383183 00:13:28.576 12:07:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:28.576 12:07:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:28.576 12:07:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:28.576 12:07:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:28.576 12:07:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:28.576 12:07:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.576 12:07:41 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.576 12:07:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.486 12:07:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:30.486 00:13:30.486 real 0m13.184s 00:13:30.486 user 0m18.774s 00:13:30.486 sys 0m6.220s 00:13:30.486 12:07:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:30.486 12:07:43 -- common/autotest_common.sh@10 -- # set +x 00:13:30.486 ************************************ 00:13:30.486 END TEST nvmf_invalid 00:13:30.486 ************************************ 00:13:30.486 12:07:43 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:30.486 12:07:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:30.486 12:07:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:30.486 12:07:43 -- common/autotest_common.sh@10 -- # set +x 00:13:30.486 ************************************ 00:13:30.486 START TEST nvmf_abort 00:13:30.486 ************************************ 00:13:30.486 12:07:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:30.747 * Looking for test storage... 00:13:30.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:30.747 12:07:43 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.747 12:07:43 -- nvmf/common.sh@7 -- # uname -s 00:13:30.747 12:07:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.747 12:07:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.747 12:07:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.747 12:07:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.747 12:07:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.747 12:07:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.747 12:07:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.747 12:07:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.747 12:07:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.747 12:07:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.747 12:07:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:30.747 12:07:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:30.747 12:07:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.747 12:07:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.747 12:07:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:30.747 12:07:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:30.747 12:07:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.747 12:07:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.747 12:07:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.747 12:07:43 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.747 12:07:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.747 12:07:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.747 12:07:43 -- paths/export.sh@5 -- # export PATH 00:13:30.747 12:07:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.747 12:07:43 -- nvmf/common.sh@46 -- # : 0 00:13:30.747 12:07:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:30.747 12:07:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:30.747 12:07:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:30.747 12:07:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.747 12:07:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.747 12:07:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:30.747 12:07:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:30.747 12:07:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:30.747 12:07:43 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:30.747 12:07:43 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:30.747 12:07:43 -- target/abort.sh@14 -- # nvmftestinit 00:13:30.747 12:07:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:30.747 12:07:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.747 12:07:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:30.747 12:07:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:30.747 12:07:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:30.747 12:07:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:13:30.747 12:07:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:30.747 12:07:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.747 12:07:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:30.747 12:07:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:30.747 12:07:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:30.747 12:07:43 -- common/autotest_common.sh@10 -- # set +x 00:13:38.966 12:07:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:38.966 12:07:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:38.966 12:07:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:38.966 12:07:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:38.966 12:07:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:38.966 12:07:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:38.966 12:07:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:38.966 12:07:50 -- nvmf/common.sh@294 -- # net_devs=() 00:13:38.966 12:07:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:38.966 12:07:50 -- nvmf/common.sh@295 -- # e810=() 00:13:38.966 12:07:50 -- nvmf/common.sh@295 -- # local -ga e810 00:13:38.966 12:07:50 -- nvmf/common.sh@296 -- # x722=() 00:13:38.966 12:07:50 -- nvmf/common.sh@296 -- # local -ga x722 00:13:38.966 12:07:50 -- nvmf/common.sh@297 -- # mlx=() 00:13:38.966 12:07:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:38.966 12:07:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:38.966 12:07:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:38.966 12:07:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:38.966 12:07:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:38.966 12:07:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:38.966 12:07:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:38.966 12:07:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:38.966 12:07:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:38.966 12:07:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:38.966 12:07:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:38.966 12:07:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:38.966 12:07:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:38.966 12:07:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:38.966 12:07:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:38.966 12:07:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:38.966 12:07:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:38.966 12:07:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:38.966 12:07:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:38.966 12:07:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:38.966 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:38.966 12:07:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:38.966 12:07:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:38.966 12:07:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.966 12:07:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.966 12:07:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:38.966 12:07:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:38.966 12:07:50 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:38.966 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:38.966 12:07:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:38.966 12:07:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:38.966 12:07:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.966 12:07:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.966 12:07:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:38.966 12:07:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:38.966 12:07:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:38.966 12:07:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:38.966 12:07:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:38.966 12:07:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.966 12:07:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:38.966 12:07:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.966 12:07:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:38.966 Found net devices under 0000:31:00.0: cvl_0_0 00:13:38.966 12:07:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.966 12:07:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:38.966 12:07:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.966 12:07:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:38.966 12:07:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.966 12:07:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:38.966 Found net devices under 0000:31:00.1: cvl_0_1 00:13:38.966 12:07:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.966 12:07:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:38.966 12:07:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:38.966 12:07:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:38.966 12:07:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:38.966 12:07:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:38.966 12:07:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:38.966 12:07:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:38.966 12:07:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:38.966 12:07:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:38.966 12:07:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:38.966 12:07:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:38.966 12:07:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:38.966 12:07:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:38.966 12:07:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:38.966 12:07:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:38.966 12:07:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:38.966 12:07:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:38.966 12:07:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:38.966 12:07:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:38.966 12:07:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:38.966 12:07:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:38.966 12:07:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
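The trace above is nvmf_tcp_init from nvmf/common.sh moving one of the two detected ports into a private network namespace so the target and the initiator can exchange NVMe/TCP traffic over real NICs on a single host. A condensed sketch of that layout follows; it reuses the interface names, namespace name and 10.0.0.x addresses visible in the log, but it is only an illustration of the idea, not the autotest script itself.

#!/usr/bin/env bash
# Sketch of the namespace topology nvmf_tcp_init builds (illustrative, not the
# autotest script); names and addresses mirror what the log shows.
set -euo pipefail

TGT_IF=cvl_0_0        # port handed to the target, ends up inside the namespace
INI_IF=cvl_0_1        # port left in the default namespace for the initiator
NS=cvl_0_0_ns_spdk    # namespace that will host nvmf_tgt
TGT_IP=10.0.0.2
INI_IP=10.0.0.1

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"            # move the target port into the namespace

ip addr add "$INI_IP/24" dev "$INI_IF"       # initiator side, default namespace
ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# let NVMe/TCP traffic (port 4420) in from the initiator interface
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# verify reachability in both directions before starting any SPDK process
ping -c 1 "$TGT_IP"
ip netns exec "$NS" ping -c 1 "$INI_IP"

With this in place, nvmf_tgt can be started under "ip netns exec $NS" and listen on 10.0.0.2:4420, while the initiator tools run unchanged in the default namespace, which is exactly the pattern the ping checks in the trace confirm.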
00:13:38.966 12:07:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:38.966 12:07:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:38.967 12:07:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:38.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:38.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:13:38.967 00:13:38.967 --- 10.0.0.2 ping statistics --- 00:13:38.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.967 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:13:38.967 12:07:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:38.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:38.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.433 ms 00:13:38.967 00:13:38.967 --- 10.0.0.1 ping statistics --- 00:13:38.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.967 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:13:38.967 12:07:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:38.967 12:07:50 -- nvmf/common.sh@410 -- # return 0 00:13:38.967 12:07:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:38.967 12:07:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:38.967 12:07:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:38.967 12:07:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:38.967 12:07:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:38.967 12:07:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:38.967 12:07:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:38.967 12:07:50 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:38.967 12:07:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:38.967 12:07:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:38.967 12:07:50 -- common/autotest_common.sh@10 -- # set +x 00:13:38.967 12:07:50 -- nvmf/common.sh@469 -- # nvmfpid=1388446 00:13:38.967 12:07:50 -- nvmf/common.sh@470 -- # waitforlisten 1388446 00:13:38.967 12:07:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:38.967 12:07:50 -- common/autotest_common.sh@819 -- # '[' -z 1388446 ']' 00:13:38.967 12:07:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.967 12:07:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:38.967 12:07:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.967 12:07:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:38.967 12:07:50 -- common/autotest_common.sh@10 -- # set +x 00:13:38.967 [2024-06-11 12:07:50.954454] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:13:38.967 [2024-06-11 12:07:50.954524] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.967 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.967 [2024-06-11 12:07:51.044446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:38.967 [2024-06-11 12:07:51.088360] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:38.967 [2024-06-11 12:07:51.088518] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.967 [2024-06-11 12:07:51.088529] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.967 [2024-06-11 12:07:51.088538] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:38.967 [2024-06-11 12:07:51.088697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.967 [2024-06-11 12:07:51.088862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.967 [2024-06-11 12:07:51.088863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:38.967 12:07:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:38.967 12:07:51 -- common/autotest_common.sh@852 -- # return 0 00:13:38.967 12:07:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:38.967 12:07:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:38.967 12:07:51 -- common/autotest_common.sh@10 -- # set +x 00:13:38.967 12:07:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.967 12:07:51 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:38.967 12:07:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.967 12:07:51 -- common/autotest_common.sh@10 -- # set +x 00:13:38.967 [2024-06-11 12:07:51.776342] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.967 12:07:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.967 12:07:51 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:38.967 12:07:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.967 12:07:51 -- common/autotest_common.sh@10 -- # set +x 00:13:38.967 Malloc0 00:13:38.967 12:07:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.967 12:07:51 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:38.967 12:07:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.967 12:07:51 -- common/autotest_common.sh@10 -- # set +x 00:13:38.967 Delay0 00:13:38.967 12:07:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.967 12:07:51 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:38.967 12:07:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.967 12:07:51 -- common/autotest_common.sh@10 -- # set +x 00:13:38.967 12:07:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.967 12:07:51 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:38.967 12:07:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.967 12:07:51 -- common/autotest_common.sh@10 -- # set +x 00:13:38.967 12:07:51 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:13:38.967 12:07:51 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:38.967 12:07:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.967 12:07:51 -- common/autotest_common.sh@10 -- # set +x 00:13:38.967 [2024-06-11 12:07:51.852509] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.967 12:07:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.967 12:07:51 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:38.967 12:07:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.967 12:07:51 -- common/autotest_common.sh@10 -- # set +x 00:13:38.967 12:07:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.967 12:07:51 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:38.967 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.967 [2024-06-11 12:07:51.951489] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:41.515 Initializing NVMe Controllers 00:13:41.515 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:41.515 controller IO queue size 128 less than required 00:13:41.515 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:41.515 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:41.515 Initialization complete. Launching workers. 00:13:41.515 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 126, failed: 33824 00:13:41.515 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33888, failed to submit 62 00:13:41.515 success 33824, unsuccess 64, failed 0 00:13:41.515 12:07:53 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:41.515 12:07:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.515 12:07:53 -- common/autotest_common.sh@10 -- # set +x 00:13:41.515 12:07:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.515 12:07:53 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:41.515 12:07:53 -- target/abort.sh@38 -- # nvmftestfini 00:13:41.515 12:07:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:41.515 12:07:53 -- nvmf/common.sh@116 -- # sync 00:13:41.515 12:07:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:41.515 12:07:54 -- nvmf/common.sh@119 -- # set +e 00:13:41.515 12:07:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:41.515 12:07:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:41.515 rmmod nvme_tcp 00:13:41.515 rmmod nvme_fabrics 00:13:41.515 rmmod nvme_keyring 00:13:41.515 12:07:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:41.515 12:07:54 -- nvmf/common.sh@123 -- # set -e 00:13:41.515 12:07:54 -- nvmf/common.sh@124 -- # return 0 00:13:41.515 12:07:54 -- nvmf/common.sh@477 -- # '[' -n 1388446 ']' 00:13:41.515 12:07:54 -- nvmf/common.sh@478 -- # killprocess 1388446 00:13:41.515 12:07:54 -- common/autotest_common.sh@926 -- # '[' -z 1388446 ']' 00:13:41.515 12:07:54 -- common/autotest_common.sh@930 -- # kill -0 1388446 00:13:41.515 12:07:54 -- common/autotest_common.sh@931 -- # uname 00:13:41.515 12:07:54 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:41.515 12:07:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1388446 00:13:41.515 12:07:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:41.515 12:07:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:41.515 12:07:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1388446' 00:13:41.515 killing process with pid 1388446 00:13:41.515 12:07:54 -- common/autotest_common.sh@945 -- # kill 1388446 00:13:41.515 12:07:54 -- common/autotest_common.sh@950 -- # wait 1388446 00:13:41.515 12:07:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:41.515 12:07:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:41.515 12:07:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:41.515 12:07:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:41.515 12:07:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:41.515 12:07:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.515 12:07:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:41.515 12:07:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.428 12:07:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:43.428 00:13:43.428 real 0m12.826s 00:13:43.428 user 0m13.344s 00:13:43.428 sys 0m6.132s 00:13:43.428 12:07:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:43.428 12:07:56 -- common/autotest_common.sh@10 -- # set +x 00:13:43.428 ************************************ 00:13:43.428 END TEST nvmf_abort 00:13:43.428 ************************************ 00:13:43.429 12:07:56 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:43.429 12:07:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:43.429 12:07:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:43.429 12:07:56 -- common/autotest_common.sh@10 -- # set +x 00:13:43.429 ************************************ 00:13:43.429 START TEST nvmf_ns_hotplug_stress 00:13:43.429 ************************************ 00:13:43.429 12:07:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:43.429 * Looking for test storage... 
00:13:43.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:43.429 12:07:56 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.429 12:07:56 -- nvmf/common.sh@7 -- # uname -s 00:13:43.429 12:07:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.429 12:07:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.429 12:07:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.429 12:07:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.690 12:07:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.690 12:07:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.690 12:07:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.690 12:07:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.690 12:07:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.690 12:07:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.690 12:07:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:43.690 12:07:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:43.690 12:07:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.690 12:07:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.690 12:07:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.690 12:07:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:43.690 12:07:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.690 12:07:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.690 12:07:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.690 12:07:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.690 12:07:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.690 12:07:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.690 12:07:56 -- paths/export.sh@5 -- # export PATH 00:13:43.690 12:07:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.690 12:07:56 -- nvmf/common.sh@46 -- # : 0 00:13:43.690 12:07:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:43.690 12:07:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:43.690 12:07:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:43.690 12:07:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.690 12:07:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.690 12:07:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:43.690 12:07:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:43.690 12:07:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:43.690 12:07:56 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:43.690 12:07:56 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:43.690 12:07:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:43.690 12:07:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.690 12:07:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:43.690 12:07:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:43.690 12:07:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:43.690 12:07:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.690 12:07:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.690 12:07:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.690 12:07:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:43.690 12:07:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:43.690 12:07:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:43.690 12:07:56 -- common/autotest_common.sh@10 -- # set +x 00:13:50.274 12:08:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:50.274 12:08:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:50.274 12:08:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:50.274 12:08:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:50.274 12:08:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:50.274 12:08:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:50.274 12:08:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:50.274 12:08:03 -- nvmf/common.sh@294 -- # net_devs=() 00:13:50.274 12:08:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:50.274 12:08:03 -- nvmf/common.sh@295 
-- # e810=() 00:13:50.274 12:08:03 -- nvmf/common.sh@295 -- # local -ga e810 00:13:50.274 12:08:03 -- nvmf/common.sh@296 -- # x722=() 00:13:50.274 12:08:03 -- nvmf/common.sh@296 -- # local -ga x722 00:13:50.274 12:08:03 -- nvmf/common.sh@297 -- # mlx=() 00:13:50.274 12:08:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:50.274 12:08:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:50.274 12:08:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:50.274 12:08:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:50.274 12:08:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:50.274 12:08:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:50.274 12:08:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:50.274 12:08:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:50.274 12:08:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:50.274 12:08:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:50.274 12:08:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:50.274 12:08:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:50.274 12:08:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:50.274 12:08:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:50.274 12:08:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:50.274 12:08:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:50.274 12:08:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:50.274 12:08:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:50.274 12:08:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:50.274 12:08:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:50.274 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:50.274 12:08:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:50.274 12:08:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:50.274 12:08:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.274 12:08:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.274 12:08:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:50.274 12:08:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:50.274 12:08:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:50.274 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:50.274 12:08:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:50.274 12:08:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:50.274 12:08:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.274 12:08:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.274 12:08:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:50.274 12:08:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:50.274 12:08:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:50.274 12:08:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:50.274 12:08:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:50.274 12:08:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.274 12:08:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:50.274 12:08:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.274 12:08:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:50.274 Found 
net devices under 0000:31:00.0: cvl_0_0 00:13:50.274 12:08:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.274 12:08:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:50.274 12:08:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.274 12:08:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:50.274 12:08:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.274 12:08:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:50.274 Found net devices under 0000:31:00.1: cvl_0_1 00:13:50.274 12:08:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.274 12:08:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:50.274 12:08:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:50.274 12:08:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:50.274 12:08:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:50.274 12:08:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:50.274 12:08:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.274 12:08:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.274 12:08:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:50.274 12:08:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:50.274 12:08:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:50.274 12:08:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:50.274 12:08:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:50.274 12:08:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:50.274 12:08:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.274 12:08:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:50.274 12:08:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:50.274 12:08:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:50.274 12:08:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:50.535 12:08:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:50.535 12:08:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:50.535 12:08:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:50.535 12:08:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:50.535 12:08:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:50.535 12:08:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:50.535 12:08:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:50.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:50.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:13:50.535 00:13:50.535 --- 10.0.0.2 ping statistics --- 00:13:50.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.535 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:13:50.535 12:08:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:50.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:50.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:13:50.535 00:13:50.535 --- 10.0.0.1 ping statistics --- 00:13:50.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.535 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:13:50.535 12:08:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:50.535 12:08:03 -- nvmf/common.sh@410 -- # return 0 00:13:50.535 12:08:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:50.535 12:08:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:50.535 12:08:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:50.535 12:08:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:50.535 12:08:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:50.535 12:08:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:50.535 12:08:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:50.796 12:08:03 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:50.796 12:08:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:50.796 12:08:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:50.796 12:08:03 -- common/autotest_common.sh@10 -- # set +x 00:13:50.796 12:08:03 -- nvmf/common.sh@469 -- # nvmfpid=1393221 00:13:50.796 12:08:03 -- nvmf/common.sh@470 -- # waitforlisten 1393221 00:13:50.796 12:08:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:50.796 12:08:03 -- common/autotest_common.sh@819 -- # '[' -z 1393221 ']' 00:13:50.796 12:08:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.796 12:08:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:50.796 12:08:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.796 12:08:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:50.796 12:08:03 -- common/autotest_common.sh@10 -- # set +x 00:13:50.796 [2024-06-11 12:08:03.635937] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:50.796 [2024-06-11 12:08:03.635984] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.796 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.796 [2024-06-11 12:08:03.718486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:50.796 [2024-06-11 12:08:03.753634] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:50.796 [2024-06-11 12:08:03.753788] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.796 [2024-06-11 12:08:03.753798] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:50.796 [2024-06-11 12:08:03.753807] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
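For orientation: the nvmf/common.sh trace above (nvmf_tcp_init) amounts to the interface plumbing below. This is a condensed sketch of the commands visible in the trace, run as root; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are specific to this run and will differ on other hosts. Moving one port of the NIC into its own network namespace is what lets the target and the initiator talk over real hardware on a single machine.

  # Condensed from the nvmf_tcp_init trace above.
  TARGET_IF=cvl_0_0            # port moved into the namespace, gets 10.0.0.2
  INITIATOR_IF=cvl_0_1         # port left in the default netns, gets 10.0.0.1
  NETNS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NETNS"
  ip link set "$TARGET_IF" netns "$NETNS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NETNS" ip link set "$TARGET_IF" up
  ip netns exec "$NETNS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                          # initiator side reaches the target
  ip netns exec "$NETNS" ping -c 1 10.0.0.1   # target namespace reaches the initiator
  modprobe nvme-tcp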
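The ns_hotplug_stress.sh trace that follows starts the target inside that namespace and builds the test subsystem (script lines @23-@42). Stripped of the xtrace noise, the bring-up is roughly the sequence below; the paths are the workspace paths used in this run, and the backgrounding/PID capture is a reconstruction of what the trace shows rather than the verbatim script.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc=$SPDK/scripts/rpc.py

  # The target runs inside the namespace that owns cvl_0_0; RPCs go over the
  # local /var/tmp/spdk.sock Unix socket, so rpc.py needs no netns wrapper.
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  # (the script then waits for /var/tmp/spdk.sock before issuing RPCs)

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0                    # 32 MB malloc bdev
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # becomes namespace 1
  $rpc bdev_null_create NULL1 1000 512                         # 1000 MB null bdev, 512 B blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1  # becomes namespace 2

  # 30 s of queued random reads from the initiator side keeps I/O in flight
  # while namespaces are hot-added and removed underneath it.
  $SPDK/build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!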
00:13:50.796 [2024-06-11 12:08:03.753942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.796 [2024-06-11 12:08:03.754116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:50.796 [2024-06-11 12:08:03.754289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.736 12:08:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:51.736 12:08:04 -- common/autotest_common.sh@852 -- # return 0 00:13:51.736 12:08:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:51.736 12:08:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:51.736 12:08:04 -- common/autotest_common.sh@10 -- # set +x 00:13:51.736 12:08:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:51.736 12:08:04 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:51.736 12:08:04 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:51.736 [2024-06-11 12:08:04.579749] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:51.736 12:08:04 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:51.996 12:08:04 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:51.996 [2024-06-11 12:08:04.909232] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:51.996 12:08:04 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:52.255 12:08:05 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:52.255 Malloc0 00:13:52.255 12:08:05 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:52.515 Delay0 00:13:52.515 12:08:05 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.774 12:08:05 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:52.774 NULL1 00:13:52.775 12:08:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:53.035 12:08:05 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:53.035 12:08:05 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1393624 00:13:53.035 12:08:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:13:53.035 12:08:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.035 EAL: No free 2048 kB hugepages reported on node 1 00:13:53.295 12:08:06 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.295 12:08:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:53.295 12:08:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:53.555 [2024-06-11 12:08:06.380132] bdev.c:4968:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:13:53.555 true 00:13:53.555 12:08:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:13:53.555 12:08:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.555 12:08:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.814 12:08:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:53.814 12:08:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:54.074 true 00:13:54.074 12:08:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:13:54.074 12:08:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.074 12:08:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.333 12:08:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:54.334 12:08:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:54.334 true 00:13:54.594 12:08:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:13:54.594 12:08:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.594 12:08:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.854 12:08:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:54.854 12:08:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:54.854 true 00:13:54.854 12:08:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:13:54.854 12:08:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.114 12:08:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.373 12:08:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:55.373 12:08:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:55.373 true 00:13:55.373 12:08:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:13:55.373 12:08:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.633 12:08:08 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.633 12:08:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:55.633 12:08:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:55.892 true 00:13:55.892 12:08:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:13:55.892 12:08:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.153 12:08:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.153 12:08:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:56.153 12:08:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:56.414 true 00:13:56.414 12:08:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:13:56.414 12:08:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.414 12:08:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.675 12:08:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:56.675 12:08:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:56.936 true 00:13:56.936 12:08:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:13:56.936 12:08:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.936 12:08:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.196 12:08:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:57.196 12:08:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:57.457 true 00:13:57.457 12:08:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:13:57.457 12:08:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.457 12:08:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.717 12:08:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:57.717 12:08:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:57.717 true 00:13:57.717 12:08:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:13:57.717 12:08:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.978 12:08:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:13:58.237 12:08:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:58.237 12:08:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:58.237 true 00:13:58.237 12:08:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:13:58.237 12:08:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.498 12:08:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.759 12:08:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:58.759 12:08:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:58.759 true 00:13:58.759 12:08:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:13:58.759 12:08:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.019 12:08:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.019 12:08:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:59.019 12:08:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:59.279 true 00:13:59.279 12:08:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:13:59.279 12:08:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.538 12:08:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.538 12:08:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:59.538 12:08:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:59.798 true 00:13:59.798 12:08:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:13:59.798 12:08:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.059 12:08:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.059 12:08:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:14:00.059 12:08:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:00.319 true 00:14:00.319 12:08:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:00.319 12:08:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.581 12:08:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.581 12:08:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:14:00.581 12:08:13 -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:00.841 true 00:14:00.842 12:08:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:00.842 12:08:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.842 12:08:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.102 12:08:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:14:01.102 12:08:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:01.363 true 00:14:01.363 12:08:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:01.363 12:08:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.363 12:08:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.623 12:08:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:14:01.623 12:08:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:01.623 true 00:14:01.883 12:08:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:01.883 12:08:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.883 12:08:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.143 12:08:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:02.143 12:08:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:02.143 true 00:14:02.403 12:08:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:02.403 12:08:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.403 12:08:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.663 12:08:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:02.663 12:08:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:02.663 true 00:14:02.663 12:08:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:02.663 12:08:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.923 12:08:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.184 12:08:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:14:03.184 12:08:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1021 00:14:03.184 true 00:14:03.184 12:08:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:03.184 12:08:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.444 12:08:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.704 12:08:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:03.704 12:08:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:03.704 true 00:14:03.704 12:08:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:03.704 12:08:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.965 12:08:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.225 12:08:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:04.225 12:08:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:04.225 true 00:14:04.225 12:08:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:04.225 12:08:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.486 12:08:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.486 12:08:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:04.486 12:08:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:04.747 true 00:14:04.747 12:08:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:04.747 12:08:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.008 12:08:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.008 12:08:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:05.008 12:08:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:05.269 true 00:14:05.269 12:08:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:05.269 12:08:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.530 12:08:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.530 12:08:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:05.530 12:08:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:05.790 true 00:14:05.790 12:08:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:05.790 
12:08:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.790 12:08:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.051 12:08:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:06.051 12:08:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:06.311 true 00:14:06.311 12:08:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:06.311 12:08:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.311 12:08:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.571 12:08:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:06.571 12:08:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:06.571 true 00:14:06.571 12:08:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:06.571 12:08:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.832 12:08:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.095 12:08:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:14:07.095 12:08:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:07.095 true 00:14:07.095 12:08:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:07.096 12:08:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.387 12:08:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.672 12:08:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:14:07.672 12:08:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:14:07.672 true 00:14:07.672 12:08:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:07.672 12:08:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.932 12:08:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.932 12:08:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:14:07.933 12:08:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:14:08.193 true 00:14:08.193 12:08:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:08.193 12:08:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.455 12:08:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.455 12:08:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:14:08.455 12:08:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:14:08.716 true 00:14:08.716 12:08:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:08.716 12:08:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.716 12:08:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.976 12:08:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:14:08.976 12:08:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:14:09.236 true 00:14:09.236 12:08:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:09.236 12:08:22 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.236 12:08:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.496 12:08:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:14:09.496 12:08:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:14:09.757 true 00:14:09.757 12:08:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:09.757 12:08:22 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.757 12:08:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.019 12:08:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:14:10.019 12:08:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:14:10.019 true 00:14:10.280 12:08:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:10.280 12:08:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.280 12:08:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.540 12:08:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:14:10.540 12:08:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:14:10.540 true 00:14:10.540 12:08:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:10.540 12:08:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.801 12:08:23 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.061 12:08:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:14:11.061 12:08:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:14:11.061 true 00:14:11.061 12:08:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:11.061 12:08:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.322 12:08:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.322 12:08:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:14:11.322 12:08:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:14:11.582 true 00:14:11.582 12:08:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:11.582 12:08:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.844 12:08:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.844 12:08:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:14:11.844 12:08:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:14:12.105 true 00:14:12.105 12:08:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:12.105 12:08:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.365 12:08:25 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.365 12:08:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:14:12.365 12:08:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:14:12.625 true 00:14:12.625 12:08:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:12.625 12:08:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.625 12:08:25 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.885 12:08:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:14:12.885 12:08:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:14:13.145 true 00:14:13.145 12:08:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:13.145 12:08:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.145 12:08:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:14:13.406 12:08:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:14:13.406 12:08:26 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:14:13.406 true 00:14:13.666 12:08:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:13.666 12:08:26 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.666 12:08:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.926 12:08:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:14:13.926 12:08:26 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:14:13.926 true 00:14:13.926 12:08:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:13.927 12:08:26 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.187 12:08:27 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.447 12:08:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:14:14.447 12:08:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:14:14.447 true 00:14:14.447 12:08:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:14.447 12:08:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.706 12:08:27 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.966 12:08:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:14:14.966 12:08:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:14:14.966 true 00:14:14.966 12:08:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:14.966 12:08:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.226 12:08:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.226 12:08:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:14:15.226 12:08:28 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:14:15.486 true 00:14:15.486 12:08:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:15.486 12:08:28 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.747 12:08:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.747 12:08:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:14:15.747 12:08:28 -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:14:16.008 true 00:14:16.008 12:08:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:16.008 12:08:28 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.268 12:08:29 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:16.268 12:08:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:14:16.268 12:08:29 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:14:16.528 true 00:14:16.528 12:08:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:16.528 12:08:29 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.528 12:08:29 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:16.789 12:08:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:14:16.789 12:08:29 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:14:17.049 true 00:14:17.049 12:08:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:17.049 12:08:29 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.049 12:08:30 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:17.310 12:08:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:14:17.310 12:08:30 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:14:17.310 true 00:14:17.570 12:08:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:17.570 12:08:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.570 12:08:30 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:17.830 12:08:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:14:17.830 12:08:30 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:14:17.830 true 00:14:17.830 12:08:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:17.830 12:08:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.090 12:08:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:18.350 12:08:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:14:18.350 12:08:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1052 00:14:18.350 true 00:14:18.350 12:08:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:18.350 12:08:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.610 12:08:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:18.871 12:08:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:14:18.872 12:08:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:14:18.872 true 00:14:18.872 12:08:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:18.872 12:08:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.132 12:08:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:19.132 12:08:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:14:19.132 12:08:32 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:14:19.393 true 00:14:19.393 12:08:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:19.393 12:08:32 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.653 12:08:32 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:19.653 12:08:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:14:19.653 12:08:32 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:14:19.913 true 00:14:19.913 12:08:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:19.913 12:08:32 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.174 12:08:32 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:20.174 12:08:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:14:20.174 12:08:33 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:14:20.435 true 00:14:20.435 12:08:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:20.435 12:08:33 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.435 12:08:33 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:20.695 12:08:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:14:20.695 12:08:33 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:14:20.954 true 00:14:20.954 12:08:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:20.954 
12:08:33 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.954 12:08:33 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:21.214 12:08:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1058 00:14:21.214 12:08:34 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:14:21.474 true 00:14:21.474 12:08:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:21.474 12:08:34 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.474 12:08:34 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:21.734 12:08:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1059 00:14:21.734 12:08:34 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:14:21.734 true 00:14:21.994 12:08:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:21.994 12:08:34 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.994 12:08:34 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:22.254 12:08:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1060 00:14:22.254 12:08:35 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:14:22.254 true 00:14:22.254 12:08:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:22.254 12:08:35 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.514 12:08:35 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:22.775 12:08:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1061 00:14:22.775 12:08:35 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1061 00:14:22.775 true 00:14:22.775 12:08:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:22.775 12:08:35 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.036 12:08:35 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:23.296 12:08:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1062 00:14:23.296 12:08:36 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1062 00:14:23.296 Initializing NVMe Controllers 00:14:23.296 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:23.296 Controller IO queue size 128, less than required. 
00:14:23.296 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:23.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:23.296 Initialization complete. Launching workers. 00:14:23.296 ======================================================== 00:14:23.296 Latency(us) 00:14:23.296 Device Information : IOPS MiB/s Average min max 00:14:23.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 32730.67 15.98 3910.67 1379.63 9786.78 00:14:23.296 ======================================================== 00:14:23.296 Total : 32730.67 15.98 3910.67 1379.63 9786.78 00:14:23.296 00:14:23.296 true 00:14:23.296 12:08:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1393624 00:14:23.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1393624) - No such process 00:14:23.296 12:08:36 -- target/ns_hotplug_stress.sh@53 -- # wait 1393624 00:14:23.296 12:08:36 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.557 12:08:36 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:23.557 12:08:36 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:14:23.557 12:08:36 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:14:23.557 12:08:36 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:14:23.557 12:08:36 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:23.557 12:08:36 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:23.818 null0 00:14:23.818 12:08:36 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:23.818 12:08:36 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:23.818 12:08:36 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:24.078 null1 00:14:24.078 12:08:36 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:24.078 12:08:36 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:24.078 12:08:36 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:24.078 null2 00:14:24.079 12:08:37 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:24.079 12:08:37 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:24.079 12:08:37 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:24.339 null3 00:14:24.339 12:08:37 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:24.339 12:08:37 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:24.339 12:08:37 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:24.599 null4 00:14:24.599 12:08:37 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:24.599 12:08:37 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:24.599 12:08:37 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:24.599 null5 00:14:24.599 12:08:37 -- target/ns_hotplug_stress.sh@59 -- # (( ++i 
)) 00:14:24.599 12:08:37 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:24.599 12:08:37 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:24.859 null6 00:14:24.859 12:08:37 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:24.859 12:08:37 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:24.859 12:08:37 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:24.859 null7 00:14:24.859 12:08:37 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:24.859 12:08:37 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:24.859 12:08:37 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:24.859 12:08:37 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:24.859 12:08:37 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:24.859 12:08:37 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:24.859 12:08:37 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:24.859 12:08:37 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:24.859 12:08:37 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:24.859 12:08:37 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:24.859 12:08:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.859 12:08:37 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:25.120 12:08:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:25.120 12:08:37 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:25.120 12:08:37 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:25.120 12:08:37 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
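The heavily interleaved trace around this point comes from eight add_remove workers being launched in the background, one per null bdev. Reconstructed from the traced script line numbers (@58-@66 for the launcher, @14-@18 for add_remove), the pattern is approximately the sketch below; it is pieced together from the trace, not copied from the script.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  add_remove() {                              # ns_hotplug_stress.sh@14-@18, as traced
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }

  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do        # @59-@60: one null bdev per worker
      $rpc bdev_null_create "null$i" 100 4096 # 100 MB, 4 KiB blocks
  done
  for ((i = 0; i < nthreads; i++)); do        # @62-@64: hotplug workers in parallel
      add_remove "$((i + 1))" "null$i" &      # worker i owns namespace ID i+1
      pids+=($!)
  done
  wait "${pids[@]}"                           # @66: wait 1400208 1400209 ...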
00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@66 -- # wait 1400208 1400209 1400211 1400213 1400215 1400217 1400219 1400221 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.121 12:08:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:25.121 12:08:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.121 12:08:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:25.121 12:08:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:25.121 12:08:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:25.121 12:08:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:25.121 12:08:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:25.121 12:08:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:25.121 12:08:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:25.382 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.382 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.382 12:08:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:25.382 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.382 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.382 12:08:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:25.382 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.382 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.382 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.382 12:08:38 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:25.382 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.382 12:08:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:25.382 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.382 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.382 12:08:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:25.382 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.382 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.382 12:08:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:25.382 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.382 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.382 12:08:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:25.382 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.382 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.382 12:08:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:25.382 12:08:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.643 12:08:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:25.903 12:08:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.903 12:08:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:25.904 12:08:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:25.904 12:08:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:25.904 12:08:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:25.904 12:08:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:25.904 12:08:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:25.904 12:08:38 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:25.904 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.904 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.904 12:08:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:25.904 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.904 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.904 12:08:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:25.904 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.904 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.164 12:08:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:26.164 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.164 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.164 12:08:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:26.164 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.164 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.164 12:08:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:26.164 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.164 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.164 12:08:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:26.164 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.164 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.164 12:08:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:26.164 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.164 12:08:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.164 12:08:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:26.164 12:08:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.164 12:08:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:26.164 12:08:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:26.164 12:08:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:26.164 12:08:39 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:26.164 12:08:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:26.164 12:08:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:26.164 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.164 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.164 12:08:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:26.423 12:08:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:26.423 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.423 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.423 12:08:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:26.423 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.423 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.423 12:08:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:26.423 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.423 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.423 12:08:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:26.423 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.423 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.423 12:08:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:26.423 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.423 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.423 12:08:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:26.423 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.423 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.423 12:08:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:26.424 12:08:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.424 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.424 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.424 12:08:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:26.424 12:08:39 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:26.424 12:08:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:26.424 12:08:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:26.424 12:08:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.683 12:08:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:26.943 12:08:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:26.943 12:08:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:26.943 12:08:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:26.943 12:08:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:26.943 12:08:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:26.943 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.943 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.944 12:08:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:26.944 12:08:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:26.944 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.944 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.944 12:08:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:26.944 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.944 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.944 12:08:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:26.944 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.944 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.944 12:08:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:26.944 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.944 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.944 12:08:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:27.203 12:08:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.203 12:08:39 -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:14:27.203 12:08:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:27.203 12:08:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:27.203 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.203 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.203 12:08:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:27.203 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.203 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.203 12:08:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:27.203 12:08:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.203 12:08:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:27.203 12:08:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:27.203 12:08:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:27.203 12:08:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:27.203 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.203 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.203 12:08:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:27.203 12:08:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:27.463 12:08:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:27.723 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.723 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.723 12:08:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:27.723 12:08:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:27.723 12:08:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:27.723 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.723 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.723 12:08:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
00:14:27.723 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.723 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.723 12:08:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:27.723 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.723 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.723 12:08:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:27.723 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.723 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.723 12:08:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:27.723 12:08:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:27.723 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.723 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.723 12:08:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:27.723 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.723 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.723 12:08:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:27.982 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.982 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.982 12:08:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:27.982 12:08:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:27.982 12:08:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:27.982 12:08:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:27.982 12:08:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.982 12:08:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:27.982 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.982 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.982 12:08:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:27.982 12:08:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:14:27.982 12:08:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:27.982 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.982 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.982 12:08:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:27.982 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.982 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.982 12:08:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:27.982 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.982 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.982 12:08:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:27.982 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.982 12:08:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.982 12:08:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:28.242 12:08:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:28.242 12:08:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.242 12:08:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.243 12:08:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:28.243 12:08:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.243 12:08:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.243 12:08:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:28.243 12:08:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.243 12:08:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.243 12:08:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:28.243 12:08:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:28.243 12:08:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.243 12:08:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:28.243 12:08:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:28.243 12:08:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.243 12:08:41 -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:14:28.243 12:08:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:28.243 12:08:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:28.243 12:08:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:28.243 12:08:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.243 12:08:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.243 12:08:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.243 12:08:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.502 12:08:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.502 12:08:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.502 12:08:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.502 12:08:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.502 12:08:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.502 12:08:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.502 12:08:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.502 12:08:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.502 12:08:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.502 12:08:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.502 12:08:41 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:28.502 12:08:41 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:28.502 12:08:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:28.502 12:08:41 -- nvmf/common.sh@116 -- # sync 00:14:28.502 12:08:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:28.502 12:08:41 -- nvmf/common.sh@119 -- # set +e 00:14:28.502 12:08:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:28.502 12:08:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:28.502 rmmod nvme_tcp 00:14:28.502 rmmod nvme_fabrics 00:14:28.502 rmmod nvme_keyring 00:14:28.502 12:08:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:28.502 12:08:41 -- nvmf/common.sh@123 -- # set -e 00:14:28.502 12:08:41 -- nvmf/common.sh@124 -- # return 0 00:14:28.502 12:08:41 -- nvmf/common.sh@477 -- # '[' -n 1393221 ']' 00:14:28.502 12:08:41 -- nvmf/common.sh@478 -- # killprocess 1393221 00:14:28.502 12:08:41 -- common/autotest_common.sh@926 -- # '[' -z 1393221 ']' 00:14:28.502 12:08:41 -- common/autotest_common.sh@930 -- # kill -0 1393221 00:14:28.502 12:08:41 -- common/autotest_common.sh@931 -- # uname 00:14:28.502 12:08:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:28.502 12:08:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1393221 00:14:28.502 12:08:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:28.502 12:08:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:28.502 12:08:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1393221' 00:14:28.502 killing process with pid 1393221 00:14:28.502 12:08:41 -- common/autotest_common.sh@945 -- # kill 1393221 00:14:28.502 12:08:41 -- common/autotest_common.sh@950 -- # wait 1393221 00:14:28.762 12:08:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:28.762 12:08:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:28.762 12:08:41 -- 
nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:28.762 12:08:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:28.762 12:08:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:28.762 12:08:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.762 12:08:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.762 12:08:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.323 12:08:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:31.323 00:14:31.323 real 0m47.347s 00:14:31.323 user 3m13.902s 00:14:31.323 sys 0m16.825s 00:14:31.323 12:08:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:31.323 12:08:43 -- common/autotest_common.sh@10 -- # set +x 00:14:31.323 ************************************ 00:14:31.323 END TEST nvmf_ns_hotplug_stress 00:14:31.323 ************************************ 00:14:31.323 12:08:43 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:31.323 12:08:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:31.323 12:08:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:31.323 12:08:43 -- common/autotest_common.sh@10 -- # set +x 00:14:31.323 ************************************ 00:14:31.323 START TEST nvmf_connect_stress 00:14:31.323 ************************************ 00:14:31.323 12:08:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:31.323 * Looking for test storage... 00:14:31.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:31.324 12:08:43 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.324 12:08:43 -- nvmf/common.sh@7 -- # uname -s 00:14:31.324 12:08:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.324 12:08:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.324 12:08:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.324 12:08:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.324 12:08:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.324 12:08:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.324 12:08:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.324 12:08:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.324 12:08:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.324 12:08:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.324 12:08:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:31.324 12:08:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:31.324 12:08:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.324 12:08:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.324 12:08:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.324 12:08:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:31.324 12:08:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.324 12:08:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.324 12:08:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.324 
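connect_stress.sh begins by sourcing nvmf/common.sh, whose defaults are echoed in the trace above. A rough sketch of that environment (variable names and values copied from the trace; deriving the host ID from the generated NQN is an assumption, not necessarily how the real file does it):

```bash
# Sketch of the test environment nvmf/common.sh is traced establishing above.
NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_IP_PREFIX=192.168.100
NVMF_TCP_IP_ADDRESS=127.0.0.1
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumption: host ID is the UUID suffix of the NQN
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NET_TYPE=phy                            # physical NICs, so a real TCP path is exercised
```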
12:08:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.324 12:08:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.324 12:08:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.324 12:08:43 -- paths/export.sh@5 -- # export PATH 00:14:31.324 12:08:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.324 12:08:43 -- nvmf/common.sh@46 -- # : 0 00:14:31.324 12:08:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:31.324 12:08:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:31.324 12:08:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:31.324 12:08:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.324 12:08:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.324 12:08:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:31.324 12:08:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:31.324 12:08:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:31.324 12:08:43 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:31.324 12:08:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:31.324 12:08:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.324 12:08:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:31.324 12:08:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:31.324 12:08:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:31.324 12:08:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.324 12:08:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:31.324 12:08:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.324 12:08:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:31.324 12:08:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:31.324 12:08:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:31.324 12:08:43 -- common/autotest_common.sh@10 -- # set +x 00:14:37.959 12:08:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:37.959 12:08:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:37.959 12:08:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:37.960 12:08:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:37.960 12:08:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:37.960 12:08:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:37.960 12:08:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:37.960 12:08:50 -- nvmf/common.sh@294 -- # net_devs=() 00:14:37.960 12:08:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:37.960 12:08:50 -- nvmf/common.sh@295 -- # e810=() 00:14:37.960 12:08:50 -- nvmf/common.sh@295 -- # local -ga e810 00:14:37.960 12:08:50 -- nvmf/common.sh@296 -- # x722=() 00:14:37.960 12:08:50 -- nvmf/common.sh@296 -- # local -ga x722 00:14:37.960 12:08:50 -- nvmf/common.sh@297 -- # mlx=() 00:14:37.960 12:08:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:37.960 12:08:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:37.960 12:08:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:37.960 12:08:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:37.960 12:08:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:37.960 12:08:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:37.960 12:08:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:37.960 12:08:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:37.960 12:08:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:37.960 12:08:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:37.960 12:08:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:37.960 12:08:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:37.960 12:08:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:37.960 12:08:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:37.960 12:08:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:37.960 12:08:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:37.960 12:08:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:37.960 12:08:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:37.960 12:08:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:37.960 12:08:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:37.960 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:37.960 12:08:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:37.960 12:08:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:37.960 12:08:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.960 12:08:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.960 12:08:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:37.960 12:08:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:37.960 12:08:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:37.960 Found 0000:31:00.1 
(0x8086 - 0x159b) 00:14:37.960 12:08:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:37.960 12:08:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:37.960 12:08:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.960 12:08:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.960 12:08:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:37.960 12:08:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:37.960 12:08:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:37.960 12:08:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:37.960 12:08:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:37.960 12:08:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.960 12:08:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:37.960 12:08:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.960 12:08:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:37.960 Found net devices under 0000:31:00.0: cvl_0_0 00:14:37.960 12:08:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.960 12:08:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:37.960 12:08:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.960 12:08:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:37.960 12:08:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.960 12:08:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:37.960 Found net devices under 0000:31:00.1: cvl_0_1 00:14:37.960 12:08:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.960 12:08:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:37.960 12:08:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:37.960 12:08:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:37.960 12:08:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:37.960 12:08:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:37.960 12:08:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.960 12:08:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:37.960 12:08:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:37.960 12:08:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:37.960 12:08:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:37.960 12:08:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:37.960 12:08:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:37.960 12:08:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:37.960 12:08:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.960 12:08:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:37.960 12:08:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:37.960 12:08:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:37.960 12:08:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:37.960 12:08:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:37.960 12:08:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:37.960 12:08:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:37.960 12:08:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:37.960 12:08:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
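nvmf_tcp_init, traced above, isolates the target-side port of the NIC in its own network namespace so that initiator (10.0.0.1) and target (10.0.0.2) exchange real TCP traffic while sharing one host. The same topology, condensed into a sketch (interface and namespace names copied from the trace; they depend on the NIC found on the node):

```bash
#!/usr/bin/env bash
# Sketch of the two-port, one-namespace topology built by nvmf_tcp_init above.
set -e
target_if=cvl_0_0
initiator_if=cvl_0_1
netns=cvl_0_0_ns_spdk

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"

# Move the target port into its own namespace.
ip netns add "$netns"
ip link set "$target_if" netns "$netns"

# Address both ends of the back-to-back link.
ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec "$netns" ip addr add 10.0.0.2/24 dev "$target_if"

# Bring everything up, including loopback inside the namespace.
ip link set "$initiator_if" up
ip netns exec "$netns" ip link set "$target_if" up
ip netns exec "$netns" ip link set lo up
```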
00:14:37.960 12:08:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:37.960 12:08:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:37.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:37.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.557 ms 00:14:37.960 00:14:37.960 --- 10.0.0.2 ping statistics --- 00:14:37.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.960 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:14:37.960 12:08:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:37.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:37.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:14:37.960 00:14:37.960 --- 10.0.0.1 ping statistics --- 00:14:37.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.960 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:14:37.960 12:08:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:37.960 12:08:50 -- nvmf/common.sh@410 -- # return 0 00:14:37.960 12:08:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:37.960 12:08:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:37.960 12:08:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:37.960 12:08:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:37.960 12:08:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:37.960 12:08:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:37.960 12:08:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:37.960 12:08:50 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:37.960 12:08:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:37.960 12:08:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:37.960 12:08:50 -- common/autotest_common.sh@10 -- # set +x 00:14:37.960 12:08:50 -- nvmf/common.sh@469 -- # nvmfpid=1405458 00:14:37.960 12:08:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:37.960 12:08:50 -- nvmf/common.sh@470 -- # waitforlisten 1405458 00:14:37.960 12:08:50 -- common/autotest_common.sh@819 -- # '[' -z 1405458 ']' 00:14:37.960 12:08:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.960 12:08:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:37.960 12:08:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.960 12:08:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:37.960 12:08:50 -- common/autotest_common.sh@10 -- # set +x 00:14:38.221 [2024-06-11 12:08:51.005244] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
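After opening TCP port 4420 and confirming reachability in both directions with the pings above, nvmfappstart launches nvmf_tgt inside the target namespace and waits for its RPC socket before configuring it. A sketch of that sequence, with paths abbreviated and the waitforlisten helper approximated by polling rpc.py (an assumption, not the actual helper):

```bash
# Sketch of the connectivity check and target launch traced above.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # host -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> host
modprobe nvme-tcp

# Start the target on cores 1-3 (-m 0xE) with full tracing (-e 0xFFFF), as traced.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Approximation of waitforlisten: poll the RPC socket until the app answers.
until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done
```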
00:14:38.221 [2024-06-11 12:08:51.005307] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.221 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.221 [2024-06-11 12:08:51.087726] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:38.221 [2024-06-11 12:08:51.114563] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:38.221 [2024-06-11 12:08:51.114666] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.221 [2024-06-11 12:08:51.114672] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.221 [2024-06-11 12:08:51.114677] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.221 [2024-06-11 12:08:51.114799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.221 [2024-06-11 12:08:51.114954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.221 [2024-06-11 12:08:51.114956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.792 12:08:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:38.792 12:08:51 -- common/autotest_common.sh@852 -- # return 0 00:14:38.792 12:08:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:38.792 12:08:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:38.792 12:08:51 -- common/autotest_common.sh@10 -- # set +x 00:14:39.053 12:08:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.053 12:08:51 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:39.053 12:08:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.053 12:08:51 -- common/autotest_common.sh@10 -- # set +x 00:14:39.053 [2024-06-11 12:08:51.848529] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.053 12:08:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.053 12:08:51 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:39.053 12:08:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.053 12:08:51 -- common/autotest_common.sh@10 -- # set +x 00:14:39.053 12:08:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.053 12:08:51 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:39.053 12:08:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.053 12:08:51 -- common/autotest_common.sh@10 -- # set +x 00:14:39.053 [2024-06-11 12:08:51.892165] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.053 12:08:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.053 12:08:51 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:39.053 12:08:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.053 12:08:51 -- common/autotest_common.sh@10 -- # set +x 00:14:39.053 NULL1 00:14:39.053 12:08:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.053 12:08:51 -- target/connect_stress.sh@21 -- # PERF_PID=1405546 00:14:39.053 12:08:51 -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:39.053 12:08:51 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:39.053 12:08:51 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:39.053 12:08:51 -- target/connect_stress.sh@27 -- # seq 1 20 00:14:39.053 12:08:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.053 12:08:51 -- target/connect_stress.sh@28 -- # cat 00:14:39.053 12:08:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.053 12:08:51 -- target/connect_stress.sh@28 -- # cat 00:14:39.053 12:08:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.053 12:08:51 -- target/connect_stress.sh@28 -- # cat 00:14:39.053 12:08:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.053 12:08:51 -- target/connect_stress.sh@28 -- # cat 00:14:39.053 12:08:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.053 12:08:51 -- target/connect_stress.sh@28 -- # cat 00:14:39.054 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.054 12:08:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.054 12:08:51 -- target/connect_stress.sh@28 -- # cat 00:14:39.054 12:08:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.054 12:08:51 -- target/connect_stress.sh@28 -- # cat 00:14:39.054 12:08:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.054 12:08:51 -- target/connect_stress.sh@28 -- # cat 00:14:39.054 12:08:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.054 12:08:51 -- target/connect_stress.sh@28 -- # cat 00:14:39.054 12:08:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.054 12:08:51 -- target/connect_stress.sh@28 -- # cat 00:14:39.054 12:08:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.054 12:08:51 -- target/connect_stress.sh@28 -- # cat 00:14:39.054 12:08:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.054 12:08:51 -- target/connect_stress.sh@28 -- # cat 00:14:39.054 12:08:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.054 12:08:51 -- target/connect_stress.sh@28 -- # cat 00:14:39.054 12:08:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.054 12:08:51 -- target/connect_stress.sh@28 -- # cat 00:14:39.054 12:08:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.054 12:08:51 -- target/connect_stress.sh@28 -- # cat 00:14:39.054 12:08:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.054 12:08:51 -- target/connect_stress.sh@28 -- # cat 00:14:39.054 12:08:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.054 12:08:51 -- target/connect_stress.sh@28 -- # cat 00:14:39.054 12:08:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.054 12:08:51 -- target/connect_stress.sh@28 -- # cat 00:14:39.054 12:08:52 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.054 12:08:52 -- target/connect_stress.sh@28 -- # cat 00:14:39.054 12:08:52 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:39.054 12:08:52 -- target/connect_stress.sh@28 -- # cat 00:14:39.054 12:08:52 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:39.054 12:08:52 -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:14:39.054 12:08:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.054 12:08:52 -- common/autotest_common.sh@10 -- # set +x 00:14:39.314 12:08:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.314 12:08:52 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:39.314 12:08:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.314 12:08:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.314 12:08:52 -- common/autotest_common.sh@10 -- # set +x 00:14:39.884 12:08:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.884 12:08:52 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:39.884 12:08:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.884 12:08:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.884 12:08:52 -- common/autotest_common.sh@10 -- # set +x 00:14:40.144 12:08:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.144 12:08:52 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:40.144 12:08:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.144 12:08:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.144 12:08:52 -- common/autotest_common.sh@10 -- # set +x 00:14:40.404 12:08:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.404 12:08:53 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:40.404 12:08:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.404 12:08:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.404 12:08:53 -- common/autotest_common.sh@10 -- # set +x 00:14:40.664 12:08:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.664 12:08:53 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:40.664 12:08:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.664 12:08:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.664 12:08:53 -- common/autotest_common.sh@10 -- # set +x 00:14:40.925 12:08:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.925 12:08:53 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:41.186 12:08:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.186 12:08:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.186 12:08:53 -- common/autotest_common.sh@10 -- # set +x 00:14:41.446 12:08:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.446 12:08:54 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:41.446 12:08:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.446 12:08:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.446 12:08:54 -- common/autotest_common.sh@10 -- # set +x 00:14:41.706 12:08:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.706 12:08:54 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:41.706 12:08:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.706 12:08:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.706 12:08:54 -- common/autotest_common.sh@10 -- # set +x 00:14:41.967 12:08:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.967 12:08:54 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:41.967 12:08:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.967 12:08:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.967 12:08:54 -- common/autotest_common.sh@10 -- # set +x 00:14:42.228 12:08:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.228 12:08:55 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:42.228 12:08:55 -- target/connect_stress.sh@35 -- # rpc_cmd 
00:14:42.228 12:08:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.228 12:08:55 -- common/autotest_common.sh@10 -- # set +x 00:14:42.800 12:08:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.800 12:08:55 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:42.800 12:08:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.800 12:08:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.800 12:08:55 -- common/autotest_common.sh@10 -- # set +x 00:14:43.061 12:08:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.061 12:08:55 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:43.061 12:08:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.061 12:08:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.061 12:08:55 -- common/autotest_common.sh@10 -- # set +x 00:14:43.321 12:08:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.321 12:08:56 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:43.321 12:08:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.321 12:08:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.321 12:08:56 -- common/autotest_common.sh@10 -- # set +x 00:14:43.580 12:08:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.580 12:08:56 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:43.580 12:08:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.580 12:08:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.580 12:08:56 -- common/autotest_common.sh@10 -- # set +x 00:14:44.152 12:08:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.152 12:08:56 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:44.152 12:08:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.152 12:08:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.152 12:08:56 -- common/autotest_common.sh@10 -- # set +x 00:14:44.412 12:08:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.412 12:08:57 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:44.412 12:08:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.412 12:08:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.412 12:08:57 -- common/autotest_common.sh@10 -- # set +x 00:14:44.673 12:08:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.673 12:08:57 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:44.673 12:08:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.673 12:08:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.673 12:08:57 -- common/autotest_common.sh@10 -- # set +x 00:14:44.933 12:08:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.933 12:08:57 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:44.933 12:08:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.933 12:08:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.933 12:08:57 -- common/autotest_common.sh@10 -- # set +x 00:14:45.195 12:08:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.195 12:08:58 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:45.195 12:08:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.195 12:08:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.195 12:08:58 -- common/autotest_common.sh@10 -- # set +x 00:14:45.766 12:08:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.766 12:08:58 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:45.766 12:08:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.766 
12:08:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.766 12:08:58 -- common/autotest_common.sh@10 -- # set +x 00:14:46.027 12:08:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:46.027 12:08:58 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:46.027 12:08:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.027 12:08:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:46.027 12:08:58 -- common/autotest_common.sh@10 -- # set +x 00:14:46.287 12:08:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:46.287 12:08:59 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:46.287 12:08:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.287 12:08:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:46.287 12:08:59 -- common/autotest_common.sh@10 -- # set +x 00:14:46.547 12:08:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:46.547 12:08:59 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:46.547 12:08:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.547 12:08:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:46.547 12:08:59 -- common/autotest_common.sh@10 -- # set +x 00:14:46.809 12:08:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:46.809 12:08:59 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:46.809 12:08:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.809 12:08:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:46.809 12:08:59 -- common/autotest_common.sh@10 -- # set +x 00:14:47.380 12:09:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.380 12:09:00 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:47.380 12:09:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.380 12:09:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.380 12:09:00 -- common/autotest_common.sh@10 -- # set +x 00:14:47.640 12:09:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.640 12:09:00 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:47.640 12:09:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.640 12:09:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.640 12:09:00 -- common/autotest_common.sh@10 -- # set +x 00:14:47.901 12:09:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.901 12:09:00 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:47.901 12:09:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.901 12:09:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.901 12:09:00 -- common/autotest_common.sh@10 -- # set +x 00:14:48.162 12:09:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.162 12:09:01 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:48.162 12:09:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.162 12:09:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.162 12:09:01 -- common/autotest_common.sh@10 -- # set +x 00:14:48.422 12:09:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.422 12:09:01 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:48.422 12:09:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.422 12:09:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.422 12:09:01 -- common/autotest_common.sh@10 -- # set +x 00:14:48.992 12:09:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.992 12:09:01 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:48.992 12:09:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.992 12:09:01 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.992 12:09:01 -- common/autotest_common.sh@10 -- # set +x 00:14:49.263 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:49.263 12:09:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.263 12:09:02 -- target/connect_stress.sh@34 -- # kill -0 1405546 00:14:49.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1405546) - No such process 00:14:49.263 12:09:02 -- target/connect_stress.sh@38 -- # wait 1405546 00:14:49.263 12:09:02 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:49.263 12:09:02 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:49.263 12:09:02 -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:49.263 12:09:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:49.263 12:09:02 -- nvmf/common.sh@116 -- # sync 00:14:49.263 12:09:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:49.264 12:09:02 -- nvmf/common.sh@119 -- # set +e 00:14:49.264 12:09:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:49.264 12:09:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:49.264 rmmod nvme_tcp 00:14:49.264 rmmod nvme_fabrics 00:14:49.264 rmmod nvme_keyring 00:14:49.264 12:09:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:49.264 12:09:02 -- nvmf/common.sh@123 -- # set -e 00:14:49.264 12:09:02 -- nvmf/common.sh@124 -- # return 0 00:14:49.264 12:09:02 -- nvmf/common.sh@477 -- # '[' -n 1405458 ']' 00:14:49.264 12:09:02 -- nvmf/common.sh@478 -- # killprocess 1405458 00:14:49.264 12:09:02 -- common/autotest_common.sh@926 -- # '[' -z 1405458 ']' 00:14:49.264 12:09:02 -- common/autotest_common.sh@930 -- # kill -0 1405458 00:14:49.264 12:09:02 -- common/autotest_common.sh@931 -- # uname 00:14:49.264 12:09:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:49.264 12:09:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1405458 00:14:49.264 12:09:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:49.264 12:09:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:49.264 12:09:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1405458' 00:14:49.264 killing process with pid 1405458 00:14:49.264 12:09:02 -- common/autotest_common.sh@945 -- # kill 1405458 00:14:49.264 12:09:02 -- common/autotest_common.sh@950 -- # wait 1405458 00:14:49.527 12:09:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:49.527 12:09:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:49.527 12:09:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:49.527 12:09:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:49.527 12:09:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:49.527 12:09:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.527 12:09:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:49.527 12:09:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.438 12:09:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:51.438 00:14:51.438 real 0m20.659s 00:14:51.438 user 0m42.429s 00:14:51.438 sys 0m8.446s 00:14:51.438 12:09:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:51.438 12:09:04 -- common/autotest_common.sh@10 -- # set +x 00:14:51.438 ************************************ 00:14:51.438 END TEST nvmf_connect_stress 00:14:51.438 
************************************ 00:14:51.438 12:09:04 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:51.438 12:09:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:51.438 12:09:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:51.438 12:09:04 -- common/autotest_common.sh@10 -- # set +x 00:14:51.438 ************************************ 00:14:51.438 START TEST nvmf_fused_ordering 00:14:51.438 ************************************ 00:14:51.438 12:09:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:51.699 * Looking for test storage... 00:14:51.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:51.699 12:09:04 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:51.699 12:09:04 -- nvmf/common.sh@7 -- # uname -s 00:14:51.699 12:09:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.699 12:09:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.699 12:09:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.699 12:09:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.699 12:09:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.699 12:09:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.699 12:09:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.699 12:09:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.699 12:09:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.699 12:09:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.699 12:09:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:51.699 12:09:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:51.699 12:09:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.699 12:09:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.699 12:09:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:51.699 12:09:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:51.699 12:09:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.699 12:09:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.699 12:09:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.699 12:09:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.699 12:09:04 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.699 12:09:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.699 12:09:04 -- paths/export.sh@5 -- # export PATH 00:14:51.699 12:09:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.699 12:09:04 -- nvmf/common.sh@46 -- # : 0 00:14:51.699 12:09:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:51.699 12:09:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:51.699 12:09:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:51.699 12:09:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.699 12:09:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.699 12:09:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:51.699 12:09:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:51.699 12:09:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:51.699 12:09:04 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:51.699 12:09:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:51.699 12:09:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.699 12:09:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:51.699 12:09:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:51.699 12:09:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:51.699 12:09:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.699 12:09:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.699 12:09:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.699 12:09:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:51.699 12:09:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:51.699 12:09:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:51.699 12:09:04 -- common/autotest_common.sh@10 -- # set +x 00:14:59.847 12:09:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:59.847 12:09:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:59.847 12:09:11 -- nvmf/common.sh@290 -- # local -a pci_devs 
00:14:59.847 12:09:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:59.847 12:09:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:59.847 12:09:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:59.847 12:09:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:59.847 12:09:11 -- nvmf/common.sh@294 -- # net_devs=() 00:14:59.847 12:09:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:59.847 12:09:11 -- nvmf/common.sh@295 -- # e810=() 00:14:59.847 12:09:11 -- nvmf/common.sh@295 -- # local -ga e810 00:14:59.847 12:09:11 -- nvmf/common.sh@296 -- # x722=() 00:14:59.847 12:09:11 -- nvmf/common.sh@296 -- # local -ga x722 00:14:59.847 12:09:11 -- nvmf/common.sh@297 -- # mlx=() 00:14:59.847 12:09:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:59.847 12:09:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:59.847 12:09:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:59.847 12:09:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:59.847 12:09:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:59.847 12:09:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:59.847 12:09:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:59.847 12:09:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:59.847 12:09:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:59.847 12:09:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:59.847 12:09:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:59.847 12:09:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:59.847 12:09:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:59.848 12:09:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:59.848 12:09:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:59.848 12:09:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:59.848 12:09:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:59.848 12:09:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:59.848 12:09:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:59.848 12:09:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:59.848 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:59.848 12:09:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:59.848 12:09:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:59.848 12:09:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.848 12:09:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.848 12:09:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:59.848 12:09:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:59.848 12:09:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:59.848 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:59.848 12:09:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:59.848 12:09:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:59.848 12:09:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.848 12:09:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.848 12:09:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:59.848 12:09:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:59.848 12:09:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:59.848 12:09:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 
00:14:59.848 12:09:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:59.848 12:09:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.848 12:09:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:59.848 12:09:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.848 12:09:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:59.848 Found net devices under 0000:31:00.0: cvl_0_0 00:14:59.848 12:09:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.848 12:09:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:59.848 12:09:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.848 12:09:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:59.848 12:09:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.848 12:09:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:59.848 Found net devices under 0000:31:00.1: cvl_0_1 00:14:59.848 12:09:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.848 12:09:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:59.848 12:09:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:59.848 12:09:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:59.848 12:09:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:59.848 12:09:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:59.848 12:09:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:59.848 12:09:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:59.848 12:09:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:59.848 12:09:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:59.848 12:09:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:59.848 12:09:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:59.848 12:09:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:59.848 12:09:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:59.848 12:09:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:59.848 12:09:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:59.848 12:09:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:59.848 12:09:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:59.848 12:09:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:59.848 12:09:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:59.848 12:09:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:59.848 12:09:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:59.848 12:09:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:59.848 12:09:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:59.848 12:09:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:59.848 12:09:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:59.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:59.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:14:59.848 00:14:59.848 --- 10.0.0.2 ping statistics --- 00:14:59.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.848 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:14:59.848 12:09:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:59.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:59.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:14:59.848 00:14:59.848 --- 10.0.0.1 ping statistics --- 00:14:59.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.848 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:14:59.848 12:09:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:59.848 12:09:11 -- nvmf/common.sh@410 -- # return 0 00:14:59.848 12:09:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:59.848 12:09:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:59.848 12:09:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:59.848 12:09:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:59.848 12:09:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:59.848 12:09:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:59.848 12:09:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:59.848 12:09:11 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:59.848 12:09:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:59.848 12:09:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:59.848 12:09:11 -- common/autotest_common.sh@10 -- # set +x 00:14:59.848 12:09:11 -- nvmf/common.sh@469 -- # nvmfpid=1412502 00:14:59.848 12:09:11 -- nvmf/common.sh@470 -- # waitforlisten 1412502 00:14:59.848 12:09:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:59.848 12:09:11 -- common/autotest_common.sh@819 -- # '[' -z 1412502 ']' 00:14:59.848 12:09:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.848 12:09:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:59.848 12:09:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.848 12:09:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:59.848 12:09:11 -- common/autotest_common.sh@10 -- # set +x 00:14:59.848 [2024-06-11 12:09:11.905762] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:14:59.848 [2024-06-11 12:09:11.905824] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.848 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.848 [2024-06-11 12:09:11.995859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.848 [2024-06-11 12:09:12.039875] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:59.848 [2024-06-11 12:09:12.040024] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:59.848 [2024-06-11 12:09:12.040033] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.848 [2024-06-11 12:09:12.040041] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.848 [2024-06-11 12:09:12.040067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.848 12:09:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:59.848 12:09:12 -- common/autotest_common.sh@852 -- # return 0 00:14:59.848 12:09:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:59.848 12:09:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:59.848 12:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:59.848 12:09:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.848 12:09:12 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:59.848 12:09:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.848 12:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:59.848 [2024-06-11 12:09:12.728859] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:59.848 12:09:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.848 12:09:12 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:59.848 12:09:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.848 12:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:59.848 12:09:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.848 12:09:12 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:59.848 12:09:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.848 12:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:59.848 [2024-06-11 12:09:12.753100] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:59.848 12:09:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.848 12:09:12 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:59.848 12:09:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.848 12:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:59.848 NULL1 00:14:59.848 12:09:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.848 12:09:12 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:59.848 12:09:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.848 12:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:59.848 12:09:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.848 12:09:12 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:59.848 12:09:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.848 12:09:12 -- common/autotest_common.sh@10 -- # set +x 00:14:59.848 12:09:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.849 12:09:12 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:59.849 [2024-06-11 12:09:12.821303] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:14:59.849 [2024-06-11 12:09:12.821363] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1412649 ] 00:14:59.849 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.421 Attached to nqn.2016-06.io.spdk:cnode1 00:15:00.421 Namespace ID: 1 size: 1GB 00:15:00.421 fused_ordering(0) 00:15:00.421 fused_ordering(1) 00:15:00.421 fused_ordering(2) 00:15:00.421 fused_ordering(3) 00:15:00.421 fused_ordering(4) 00:15:00.421 fused_ordering(5) 00:15:00.421 fused_ordering(6) 00:15:00.421 fused_ordering(7) 00:15:00.421 fused_ordering(8) 00:15:00.421 fused_ordering(9) 00:15:00.421 fused_ordering(10) 00:15:00.421 fused_ordering(11) 00:15:00.421 fused_ordering(12) 00:15:00.421 fused_ordering(13) 00:15:00.421 fused_ordering(14) 00:15:00.421 fused_ordering(15) 00:15:00.421 fused_ordering(16) 00:15:00.421 fused_ordering(17) 00:15:00.421 fused_ordering(18) 00:15:00.421 fused_ordering(19) 00:15:00.421 fused_ordering(20) 00:15:00.421 fused_ordering(21) 00:15:00.421 fused_ordering(22) 00:15:00.421 fused_ordering(23) 00:15:00.421 fused_ordering(24) 00:15:00.421 fused_ordering(25) 00:15:00.421 fused_ordering(26) 00:15:00.421 fused_ordering(27) 00:15:00.421 fused_ordering(28) 00:15:00.421 fused_ordering(29) 00:15:00.421 fused_ordering(30) 00:15:00.421 fused_ordering(31) 00:15:00.421 fused_ordering(32) 00:15:00.421 fused_ordering(33) 00:15:00.421 fused_ordering(34) 00:15:00.421 fused_ordering(35) 00:15:00.421 fused_ordering(36) 00:15:00.421 fused_ordering(37) 00:15:00.421 fused_ordering(38) 00:15:00.421 fused_ordering(39) 00:15:00.421 fused_ordering(40) 00:15:00.421 fused_ordering(41) 00:15:00.421 fused_ordering(42) 00:15:00.421 fused_ordering(43) 00:15:00.421 fused_ordering(44) 00:15:00.421 fused_ordering(45) 00:15:00.421 fused_ordering(46) 00:15:00.421 fused_ordering(47) 00:15:00.421 fused_ordering(48) 00:15:00.421 fused_ordering(49) 00:15:00.421 fused_ordering(50) 00:15:00.421 fused_ordering(51) 00:15:00.421 fused_ordering(52) 00:15:00.421 fused_ordering(53) 00:15:00.421 fused_ordering(54) 00:15:00.421 fused_ordering(55) 00:15:00.421 fused_ordering(56) 00:15:00.421 fused_ordering(57) 00:15:00.421 fused_ordering(58) 00:15:00.421 fused_ordering(59) 00:15:00.421 fused_ordering(60) 00:15:00.421 fused_ordering(61) 00:15:00.421 fused_ordering(62) 00:15:00.421 fused_ordering(63) 00:15:00.421 fused_ordering(64) 00:15:00.421 fused_ordering(65) 00:15:00.421 fused_ordering(66) 00:15:00.421 fused_ordering(67) 00:15:00.421 fused_ordering(68) 00:15:00.421 fused_ordering(69) 00:15:00.421 fused_ordering(70) 00:15:00.421 fused_ordering(71) 00:15:00.421 fused_ordering(72) 00:15:00.421 fused_ordering(73) 00:15:00.421 fused_ordering(74) 00:15:00.421 fused_ordering(75) 00:15:00.421 fused_ordering(76) 00:15:00.421 fused_ordering(77) 00:15:00.421 fused_ordering(78) 00:15:00.421 fused_ordering(79) 00:15:00.421 fused_ordering(80) 00:15:00.421 fused_ordering(81) 00:15:00.421 fused_ordering(82) 00:15:00.421 fused_ordering(83) 00:15:00.421 fused_ordering(84) 00:15:00.421 fused_ordering(85) 00:15:00.421 fused_ordering(86) 00:15:00.421 fused_ordering(87) 00:15:00.421 fused_ordering(88) 00:15:00.421 fused_ordering(89) 00:15:00.421 fused_ordering(90) 00:15:00.421 fused_ordering(91) 00:15:00.421 fused_ordering(92) 00:15:00.421 fused_ordering(93) 00:15:00.421 fused_ordering(94) 00:15:00.421 fused_ordering(95) 00:15:00.421 fused_ordering(96) 00:15:00.421 
fused_ordering(97) 00:15:00.421 fused_ordering(98) 00:15:00.421 fused_ordering(99) 00:15:00.421 fused_ordering(100) 00:15:00.421 fused_ordering(101) 00:15:00.421 fused_ordering(102) 00:15:00.421 fused_ordering(103) 00:15:00.421 fused_ordering(104) 00:15:00.421 fused_ordering(105) 00:15:00.421 fused_ordering(106) 00:15:00.421 fused_ordering(107) 00:15:00.421 fused_ordering(108) 00:15:00.421 fused_ordering(109) 00:15:00.421 fused_ordering(110) 00:15:00.421 fused_ordering(111) 00:15:00.421 fused_ordering(112) 00:15:00.421 fused_ordering(113) 00:15:00.421 fused_ordering(114) 00:15:00.421 fused_ordering(115) 00:15:00.421 fused_ordering(116) 00:15:00.421 fused_ordering(117) 00:15:00.421 fused_ordering(118) 00:15:00.421 fused_ordering(119) 00:15:00.421 fused_ordering(120) 00:15:00.421 fused_ordering(121) 00:15:00.421 fused_ordering(122) 00:15:00.421 fused_ordering(123) 00:15:00.421 fused_ordering(124) 00:15:00.421 fused_ordering(125) 00:15:00.421 fused_ordering(126) 00:15:00.421 fused_ordering(127) 00:15:00.421 fused_ordering(128) 00:15:00.421 fused_ordering(129) 00:15:00.421 fused_ordering(130) 00:15:00.421 fused_ordering(131) 00:15:00.421 fused_ordering(132) 00:15:00.421 fused_ordering(133) 00:15:00.421 fused_ordering(134) 00:15:00.421 fused_ordering(135) 00:15:00.421 fused_ordering(136) 00:15:00.421 fused_ordering(137) 00:15:00.421 fused_ordering(138) 00:15:00.421 fused_ordering(139) 00:15:00.421 fused_ordering(140) 00:15:00.421 fused_ordering(141) 00:15:00.421 fused_ordering(142) 00:15:00.421 fused_ordering(143) 00:15:00.421 fused_ordering(144) 00:15:00.421 fused_ordering(145) 00:15:00.421 fused_ordering(146) 00:15:00.421 fused_ordering(147) 00:15:00.421 fused_ordering(148) 00:15:00.421 fused_ordering(149) 00:15:00.421 fused_ordering(150) 00:15:00.421 fused_ordering(151) 00:15:00.421 fused_ordering(152) 00:15:00.421 fused_ordering(153) 00:15:00.421 fused_ordering(154) 00:15:00.421 fused_ordering(155) 00:15:00.421 fused_ordering(156) 00:15:00.421 fused_ordering(157) 00:15:00.421 fused_ordering(158) 00:15:00.421 fused_ordering(159) 00:15:00.421 fused_ordering(160) 00:15:00.421 fused_ordering(161) 00:15:00.421 fused_ordering(162) 00:15:00.421 fused_ordering(163) 00:15:00.421 fused_ordering(164) 00:15:00.421 fused_ordering(165) 00:15:00.421 fused_ordering(166) 00:15:00.421 fused_ordering(167) 00:15:00.421 fused_ordering(168) 00:15:00.421 fused_ordering(169) 00:15:00.421 fused_ordering(170) 00:15:00.421 fused_ordering(171) 00:15:00.421 fused_ordering(172) 00:15:00.421 fused_ordering(173) 00:15:00.421 fused_ordering(174) 00:15:00.421 fused_ordering(175) 00:15:00.421 fused_ordering(176) 00:15:00.421 fused_ordering(177) 00:15:00.421 fused_ordering(178) 00:15:00.421 fused_ordering(179) 00:15:00.421 fused_ordering(180) 00:15:00.421 fused_ordering(181) 00:15:00.421 fused_ordering(182) 00:15:00.421 fused_ordering(183) 00:15:00.421 fused_ordering(184) 00:15:00.421 fused_ordering(185) 00:15:00.421 fused_ordering(186) 00:15:00.421 fused_ordering(187) 00:15:00.421 fused_ordering(188) 00:15:00.421 fused_ordering(189) 00:15:00.421 fused_ordering(190) 00:15:00.421 fused_ordering(191) 00:15:00.421 fused_ordering(192) 00:15:00.421 fused_ordering(193) 00:15:00.421 fused_ordering(194) 00:15:00.421 fused_ordering(195) 00:15:00.421 fused_ordering(196) 00:15:00.421 fused_ordering(197) 00:15:00.421 fused_ordering(198) 00:15:00.421 fused_ordering(199) 00:15:00.421 fused_ordering(200) 00:15:00.421 fused_ordering(201) 00:15:00.421 fused_ordering(202) 00:15:00.421 fused_ordering(203) 00:15:00.421 fused_ordering(204) 
00:15:00.421 fused_ordering(205) 00:15:00.421 fused_ordering(206) 00:15:00.421 fused_ordering(207) 00:15:00.421 fused_ordering(208) 00:15:00.421 fused_ordering(209) 00:15:00.421 fused_ordering(210) 00:15:00.421 fused_ordering(211) 00:15:00.421 fused_ordering(212) 00:15:00.421 fused_ordering(213) 00:15:00.421 fused_ordering(214) 00:15:00.421 fused_ordering(215) 00:15:00.421 fused_ordering(216) 00:15:00.421 fused_ordering(217) 00:15:00.421 fused_ordering(218) 00:15:00.421 fused_ordering(219) 00:15:00.421 fused_ordering(220) 00:15:00.421 fused_ordering(221) 00:15:00.421 fused_ordering(222) 00:15:00.421 fused_ordering(223) 00:15:00.421 fused_ordering(224) 00:15:00.421 fused_ordering(225) 00:15:00.421 fused_ordering(226) 00:15:00.421 fused_ordering(227) 00:15:00.421 fused_ordering(228) 00:15:00.421 fused_ordering(229) 00:15:00.421 fused_ordering(230) 00:15:00.421 fused_ordering(231) 00:15:00.421 fused_ordering(232) 00:15:00.422 fused_ordering(233) 00:15:00.422 fused_ordering(234) 00:15:00.422 fused_ordering(235) 00:15:00.422 fused_ordering(236) 00:15:00.422 fused_ordering(237) 00:15:00.422 fused_ordering(238) 00:15:00.422 fused_ordering(239) 00:15:00.422 fused_ordering(240) 00:15:00.422 fused_ordering(241) 00:15:00.422 fused_ordering(242) 00:15:00.422 fused_ordering(243) 00:15:00.422 fused_ordering(244) 00:15:00.422 fused_ordering(245) 00:15:00.422 fused_ordering(246) 00:15:00.422 fused_ordering(247) 00:15:00.422 fused_ordering(248) 00:15:00.422 fused_ordering(249) 00:15:00.422 fused_ordering(250) 00:15:00.422 fused_ordering(251) 00:15:00.422 fused_ordering(252) 00:15:00.422 fused_ordering(253) 00:15:00.422 fused_ordering(254) 00:15:00.422 fused_ordering(255) 00:15:00.422 fused_ordering(256) 00:15:00.422 fused_ordering(257) 00:15:00.422 fused_ordering(258) 00:15:00.422 fused_ordering(259) 00:15:00.422 fused_ordering(260) 00:15:00.422 fused_ordering(261) 00:15:00.422 fused_ordering(262) 00:15:00.422 fused_ordering(263) 00:15:00.422 fused_ordering(264) 00:15:00.422 fused_ordering(265) 00:15:00.422 fused_ordering(266) 00:15:00.422 fused_ordering(267) 00:15:00.422 fused_ordering(268) 00:15:00.422 fused_ordering(269) 00:15:00.422 fused_ordering(270) 00:15:00.422 fused_ordering(271) 00:15:00.422 fused_ordering(272) 00:15:00.422 fused_ordering(273) 00:15:00.422 fused_ordering(274) 00:15:00.422 fused_ordering(275) 00:15:00.422 fused_ordering(276) 00:15:00.422 fused_ordering(277) 00:15:00.422 fused_ordering(278) 00:15:00.422 fused_ordering(279) 00:15:00.422 fused_ordering(280) 00:15:00.422 fused_ordering(281) 00:15:00.422 fused_ordering(282) 00:15:00.422 fused_ordering(283) 00:15:00.422 fused_ordering(284) 00:15:00.422 fused_ordering(285) 00:15:00.422 fused_ordering(286) 00:15:00.422 fused_ordering(287) 00:15:00.422 fused_ordering(288) 00:15:00.422 fused_ordering(289) 00:15:00.422 fused_ordering(290) 00:15:00.422 fused_ordering(291) 00:15:00.422 fused_ordering(292) 00:15:00.422 fused_ordering(293) 00:15:00.422 fused_ordering(294) 00:15:00.422 fused_ordering(295) 00:15:00.422 fused_ordering(296) 00:15:00.422 fused_ordering(297) 00:15:00.422 fused_ordering(298) 00:15:00.422 fused_ordering(299) 00:15:00.422 fused_ordering(300) 00:15:00.422 fused_ordering(301) 00:15:00.422 fused_ordering(302) 00:15:00.422 fused_ordering(303) 00:15:00.422 fused_ordering(304) 00:15:00.422 fused_ordering(305) 00:15:00.422 fused_ordering(306) 00:15:00.422 fused_ordering(307) 00:15:00.422 fused_ordering(308) 00:15:00.422 fused_ordering(309) 00:15:00.422 fused_ordering(310) 00:15:00.422 fused_ordering(311) 00:15:00.422 
fused_ordering(312) through fused_ordering(951): repeated per-iteration counter entries logged between 00:15:00.422 and 00:15:01.830 (condensed). One target-side error was interleaved with the fused_ordering(821) entry at 00:15:01.830:
[2024-06-11 12:09:14.802968] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e4ae0 is same with the state(5) to be set
fused_ordering(952) 00:15:01.830 fused_ordering(953) 00:15:01.830 fused_ordering(954) 00:15:01.830 fused_ordering(955) 00:15:01.830 fused_ordering(956) 00:15:01.830 fused_ordering(957) 00:15:01.830 fused_ordering(958) 00:15:01.830 fused_ordering(959) 00:15:01.830 fused_ordering(960) 00:15:01.830 fused_ordering(961) 00:15:01.830 fused_ordering(962) 00:15:01.830 fused_ordering(963) 00:15:01.830 fused_ordering(964) 00:15:01.830 fused_ordering(965) 00:15:01.830 fused_ordering(966) 00:15:01.830 fused_ordering(967) 00:15:01.830 fused_ordering(968) 00:15:01.830 fused_ordering(969) 00:15:01.830 fused_ordering(970) 00:15:01.830 fused_ordering(971) 00:15:01.830 fused_ordering(972) 00:15:01.830 fused_ordering(973) 00:15:01.830 fused_ordering(974) 00:15:01.830 fused_ordering(975) 00:15:01.830 fused_ordering(976) 00:15:01.830 fused_ordering(977) 00:15:01.830 fused_ordering(978) 00:15:01.830 fused_ordering(979) 00:15:01.830 fused_ordering(980) 00:15:01.830 fused_ordering(981) 00:15:01.830 fused_ordering(982) 00:15:01.830 fused_ordering(983) 00:15:01.830 fused_ordering(984) 00:15:01.830 fused_ordering(985) 00:15:01.830 fused_ordering(986) 00:15:01.830 fused_ordering(987) 00:15:01.830 fused_ordering(988) 00:15:01.830 fused_ordering(989) 00:15:01.830 fused_ordering(990) 00:15:01.830 fused_ordering(991) 00:15:01.830 fused_ordering(992) 00:15:01.830 fused_ordering(993) 00:15:01.830 fused_ordering(994) 00:15:01.830 fused_ordering(995) 00:15:01.830 fused_ordering(996) 00:15:01.830 fused_ordering(997) 00:15:01.830 fused_ordering(998) 00:15:01.830 fused_ordering(999) 00:15:01.830 fused_ordering(1000) 00:15:01.830 fused_ordering(1001) 00:15:01.830 fused_ordering(1002) 00:15:01.830 fused_ordering(1003) 00:15:01.830 fused_ordering(1004) 00:15:01.830 fused_ordering(1005) 00:15:01.830 fused_ordering(1006) 00:15:01.830 fused_ordering(1007) 00:15:01.830 fused_ordering(1008) 00:15:01.830 fused_ordering(1009) 00:15:01.830 fused_ordering(1010) 00:15:01.830 fused_ordering(1011) 00:15:01.830 fused_ordering(1012) 00:15:01.830 fused_ordering(1013) 00:15:01.830 fused_ordering(1014) 00:15:01.830 fused_ordering(1015) 00:15:01.830 fused_ordering(1016) 00:15:01.830 fused_ordering(1017) 00:15:01.830 fused_ordering(1018) 00:15:01.830 fused_ordering(1019) 00:15:01.830 fused_ordering(1020) 00:15:01.830 fused_ordering(1021) 00:15:01.830 fused_ordering(1022) 00:15:01.830 fused_ordering(1023) 00:15:01.830 12:09:14 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:01.830 12:09:14 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:01.830 12:09:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:01.830 12:09:14 -- nvmf/common.sh@116 -- # sync 00:15:01.830 12:09:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:01.830 12:09:14 -- nvmf/common.sh@119 -- # set +e 00:15:01.830 12:09:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:01.830 12:09:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:01.830 rmmod nvme_tcp 00:15:01.830 rmmod nvme_fabrics 00:15:02.090 rmmod nvme_keyring 00:15:02.091 12:09:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:02.091 12:09:14 -- nvmf/common.sh@123 -- # set -e 00:15:02.091 12:09:14 -- nvmf/common.sh@124 -- # return 0 00:15:02.091 12:09:14 -- nvmf/common.sh@477 -- # '[' -n 1412502 ']' 00:15:02.091 12:09:14 -- nvmf/common.sh@478 -- # killprocess 1412502 00:15:02.091 12:09:14 -- common/autotest_common.sh@926 -- # '[' -z 1412502 ']' 00:15:02.091 12:09:14 -- common/autotest_common.sh@930 -- # kill -0 1412502 00:15:02.091 12:09:14 -- common/autotest_common.sh@931 -- 
# uname 00:15:02.091 12:09:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:02.091 12:09:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1412502 00:15:02.091 12:09:14 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:02.091 12:09:14 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:02.091 12:09:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1412502' 00:15:02.091 killing process with pid 1412502 00:15:02.091 12:09:14 -- common/autotest_common.sh@945 -- # kill 1412502 00:15:02.091 12:09:14 -- common/autotest_common.sh@950 -- # wait 1412502 00:15:02.091 12:09:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:02.091 12:09:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:02.091 12:09:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:02.091 12:09:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:02.091 12:09:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:02.091 12:09:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.091 12:09:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.091 12:09:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.633 12:09:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:04.633 00:15:04.633 real 0m12.704s 00:15:04.633 user 0m6.217s 00:15:04.633 sys 0m6.688s 00:15:04.633 12:09:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:04.633 12:09:17 -- common/autotest_common.sh@10 -- # set +x 00:15:04.633 ************************************ 00:15:04.633 END TEST nvmf_fused_ordering 00:15:04.633 ************************************ 00:15:04.633 12:09:17 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:04.633 12:09:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:04.633 12:09:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:04.633 12:09:17 -- common/autotest_common.sh@10 -- # set +x 00:15:04.633 ************************************ 00:15:04.633 START TEST nvmf_delete_subsystem 00:15:04.633 ************************************ 00:15:04.633 12:09:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:04.633 * Looking for test storage... 
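The nvmf_fused_ordering teardown traced above (nvmftestfini and nvmfcleanup) reduces to a few host-side steps: unload the kernel NVMe/TCP initiator modules, stop the nvmf_tgt process that served the test, drop the target network namespace, and flush the test address from the initiator interface. A simplified sketch of that sequence, with the PID taken from the trace and the namespace-removal command assumed rather than copied from the helper:

  modprobe -v -r nvme-tcp            # rmmod nvme_tcp, nvme_fabrics, nvme_keyring, as logged above
  modprobe -v -r nvme-fabrics
  kill 1412502                       # killprocess: stop the nvmf_tgt reactor process, then wait for it to exit
  ip netns delete cvl_0_0_ns_spdk    # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1           # clear the initiator-side test address

The same run_test harness then launches the next test, nvmf_delete_subsystem, whose setup is traced below.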
00:15:04.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:04.633 12:09:17 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:04.633 12:09:17 -- nvmf/common.sh@7 -- # uname -s 00:15:04.633 12:09:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.633 12:09:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.633 12:09:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.633 12:09:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.633 12:09:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.634 12:09:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.634 12:09:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.634 12:09:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.634 12:09:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.634 12:09:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.634 12:09:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:04.634 12:09:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:04.634 12:09:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.634 12:09:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.634 12:09:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:04.634 12:09:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:04.634 12:09:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.634 12:09:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.634 12:09:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.634 12:09:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.634 12:09:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.634 12:09:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.634 12:09:17 -- paths/export.sh@5 -- # export PATH 00:15:04.634 12:09:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.634 12:09:17 -- nvmf/common.sh@46 -- # : 0 00:15:04.634 12:09:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:04.634 12:09:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:04.634 12:09:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:04.634 12:09:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.634 12:09:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.634 12:09:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:04.634 12:09:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:04.634 12:09:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:04.634 12:09:17 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:04.634 12:09:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:04.634 12:09:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:04.634 12:09:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:04.634 12:09:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:04.634 12:09:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:04.634 12:09:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.634 12:09:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:04.634 12:09:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.634 12:09:17 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:04.634 12:09:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:04.634 12:09:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:04.634 12:09:17 -- common/autotest_common.sh@10 -- # set +x 00:15:11.304 12:09:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:11.304 12:09:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:11.304 12:09:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:11.304 12:09:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:11.304 12:09:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:11.304 12:09:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:11.304 12:09:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:11.304 12:09:24 -- nvmf/common.sh@294 -- # net_devs=() 00:15:11.304 12:09:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:11.304 12:09:24 -- nvmf/common.sh@295 -- # e810=() 00:15:11.304 12:09:24 -- nvmf/common.sh@295 -- # local -ga e810 00:15:11.304 12:09:24 -- nvmf/common.sh@296 -- # x722=() 
00:15:11.304 12:09:24 -- nvmf/common.sh@296 -- # local -ga x722 00:15:11.304 12:09:24 -- nvmf/common.sh@297 -- # mlx=() 00:15:11.304 12:09:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:11.304 12:09:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:11.304 12:09:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:11.304 12:09:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:11.304 12:09:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:11.304 12:09:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:11.304 12:09:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:11.304 12:09:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:11.304 12:09:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:11.304 12:09:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:11.304 12:09:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:11.304 12:09:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:11.304 12:09:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:11.304 12:09:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:11.304 12:09:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:11.304 12:09:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:11.304 12:09:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:11.304 12:09:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:11.304 12:09:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:11.304 12:09:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:11.304 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:11.304 12:09:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:11.304 12:09:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:11.304 12:09:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.304 12:09:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.304 12:09:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:11.304 12:09:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:11.304 12:09:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:11.304 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:11.304 12:09:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:11.304 12:09:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:11.304 12:09:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.304 12:09:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.304 12:09:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:11.304 12:09:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:11.304 12:09:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:11.304 12:09:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:11.304 12:09:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:11.304 12:09:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.304 12:09:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:11.304 12:09:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.304 12:09:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:11.304 Found net devices under 0000:31:00.0: cvl_0_0 00:15:11.304 12:09:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
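Both E810 ports (cvl_0_0 and cvl_0_1) end up in net_devs, and nvmf_tcp_init then splits them between a dedicated target network namespace and the root namespace before verifying connectivity with ping, as traced below. Condensed to the underlying commands, with names and addresses taken from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic reach the listener
  ping -c 1 10.0.0.2                                              # root namespace to target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target namespace back to root namespace

This is what makes a single two-port host look like a separate target and initiator for the rest of the test.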
00:15:11.304 12:09:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:11.304 12:09:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.304 12:09:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:11.304 12:09:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.304 12:09:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:11.304 Found net devices under 0000:31:00.1: cvl_0_1 00:15:11.304 12:09:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.304 12:09:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:11.304 12:09:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:11.304 12:09:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:11.304 12:09:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:11.304 12:09:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:11.304 12:09:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.304 12:09:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:11.304 12:09:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:11.304 12:09:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:11.304 12:09:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:11.304 12:09:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:11.304 12:09:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:11.304 12:09:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:11.304 12:09:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.304 12:09:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:11.304 12:09:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:11.304 12:09:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:11.304 12:09:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:11.565 12:09:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:11.565 12:09:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:11.565 12:09:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:11.565 12:09:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:11.565 12:09:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:11.565 12:09:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:11.565 12:09:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:11.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:11.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:15:11.565 00:15:11.565 --- 10.0.0.2 ping statistics --- 00:15:11.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.565 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:15:11.565 12:09:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:11.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:11.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:15:11.565 00:15:11.565 --- 10.0.0.1 ping statistics --- 00:15:11.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.565 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:15:11.565 12:09:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:11.565 12:09:24 -- nvmf/common.sh@410 -- # return 0 00:15:11.565 12:09:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:11.565 12:09:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:11.565 12:09:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:11.565 12:09:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:11.566 12:09:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:11.566 12:09:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:11.566 12:09:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:11.566 12:09:24 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:11.566 12:09:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:11.566 12:09:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:11.566 12:09:24 -- common/autotest_common.sh@10 -- # set +x 00:15:11.566 12:09:24 -- nvmf/common.sh@469 -- # nvmfpid=1417278 00:15:11.566 12:09:24 -- nvmf/common.sh@470 -- # waitforlisten 1417278 00:15:11.566 12:09:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:11.566 12:09:24 -- common/autotest_common.sh@819 -- # '[' -z 1417278 ']' 00:15:11.566 12:09:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.566 12:09:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:11.566 12:09:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.566 12:09:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:11.566 12:09:24 -- common/autotest_common.sh@10 -- # set +x 00:15:11.826 [2024-06-11 12:09:24.603426] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:11.826 [2024-06-11 12:09:24.603490] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.826 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.826 [2024-06-11 12:09:24.678619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:11.826 [2024-06-11 12:09:24.715283] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:11.826 [2024-06-11 12:09:24.715416] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.826 [2024-06-11 12:09:24.715426] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.826 [2024-06-11 12:09:24.715434] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
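With nvmf_tgt running inside the namespace and listening on /var/tmp/spdk.sock, delete_subsystem.sh provisions the target over JSON-RPC and then launches spdk_nvme_perf against it, as traced below. The rpc_cmd helper drives the target's RPC server, so the sequence is roughly equivalent to the following scripts/rpc.py calls; this is a sketch mirroring the flags in the trace, not a verbatim copy of the script:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512            # 1000 MB null bdev with 512-byte blocks
  scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # roughly 1 s of injected latency
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &

The delay bdev is the important piece: with about one second of added latency per I/O, perf's queue depth of 128 guarantees that commands are still outstanding when the subsystem is deleted a moment later.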
00:15:11.826 [2024-06-11 12:09:24.715582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.826 [2024-06-11 12:09:24.715584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.397 12:09:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:12.397 12:09:25 -- common/autotest_common.sh@852 -- # return 0 00:15:12.397 12:09:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:12.397 12:09:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:12.397 12:09:25 -- common/autotest_common.sh@10 -- # set +x 00:15:12.397 12:09:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.397 12:09:25 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:12.397 12:09:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.397 12:09:25 -- common/autotest_common.sh@10 -- # set +x 00:15:12.397 [2024-06-11 12:09:25.410309] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.397 12:09:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.397 12:09:25 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:12.397 12:09:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.397 12:09:25 -- common/autotest_common.sh@10 -- # set +x 00:15:12.397 12:09:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.397 12:09:25 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:12.397 12:09:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.397 12:09:25 -- common/autotest_common.sh@10 -- # set +x 00:15:12.397 [2024-06-11 12:09:25.426449] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.397 12:09:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.397 12:09:25 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:12.657 12:09:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.657 12:09:25 -- common/autotest_common.sh@10 -- # set +x 00:15:12.657 NULL1 00:15:12.657 12:09:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.657 12:09:25 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:12.657 12:09:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.657 12:09:25 -- common/autotest_common.sh@10 -- # set +x 00:15:12.657 Delay0 00:15:12.657 12:09:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.657 12:09:25 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:12.657 12:09:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.657 12:09:25 -- common/autotest_common.sh@10 -- # set +x 00:15:12.657 12:09:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.657 12:09:25 -- target/delete_subsystem.sh@28 -- # perf_pid=1417624 00:15:12.657 12:09:25 -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:12.657 12:09:25 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:12.657 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.657 [2024-06-11 12:09:25.511086] 
subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:14.567 12:09:27 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:14.567 12:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:14.567 12:09:27 -- common/autotest_common.sh@10 -- # set +x 00:15:14.827 Read completed with error (sct=0, sc=8) 00:15:14.827 Write completed with error (sct=0, sc=8) 00:15:14.827 Read completed with error (sct=0, sc=8) 00:15:14.827 starting I/O failed: -6 00:15:14.827 Read completed with error (sct=0, sc=8) 00:15:14.827 Read completed with error (sct=0, sc=8) 00:15:14.827 Read completed with error (sct=0, sc=8) 00:15:14.827 Read completed with error (sct=0, sc=8) 00:15:14.827 starting I/O failed: -6 00:15:14.827 Write completed with error (sct=0, sc=8) 00:15:14.827 Write completed with error (sct=0, sc=8) 00:15:14.827 Read completed with error (sct=0, sc=8) 00:15:14.827 Read completed with error (sct=0, sc=8) 00:15:14.827 starting I/O failed: -6 00:15:14.827 Read completed with error (sct=0, sc=8) 00:15:14.827 Read completed with error (sct=0, sc=8) 00:15:14.827 Write completed with error (sct=0, sc=8) 00:15:14.827 Read completed with error (sct=0, sc=8) 00:15:14.827 starting I/O failed: -6 00:15:14.827 Write completed with error (sct=0, sc=8) 00:15:14.827 Write completed with error (sct=0, sc=8) 00:15:14.827 Read completed with error (sct=0, sc=8) 00:15:14.827 Write completed with error (sct=0, sc=8) 00:15:14.827 starting I/O failed: -6 00:15:14.827 Write completed with error (sct=0, sc=8) 00:15:14.827 Read completed with error (sct=0, sc=8) 00:15:14.827 Read completed with error (sct=0, sc=8) 00:15:14.827 Read completed with error (sct=0, sc=8) 00:15:14.827 starting I/O failed: -6 00:15:14.827 Read completed with error (sct=0, sc=8) 00:15:14.827 Write completed with error (sct=0, sc=8) 00:15:14.827 Read completed with error (sct=0, sc=8) 00:15:14.827 Write completed with error (sct=0, sc=8) 00:15:14.827 starting I/O failed: -6 00:15:14.827 Read completed with error (sct=0, sc=8) 00:15:14.827 Read completed with error (sct=0, sc=8) 00:15:14.828 Write completed with error (sct=0, sc=8) 00:15:14.828 Read completed with error (sct=0, sc=8) 00:15:14.828 starting I/O failed: -6 00:15:14.828 Read completed with error (sct=0, sc=8) 00:15:14.828 Read completed with error (sct=0, sc=8) 00:15:14.828 Read completed with error (sct=0, sc=8) 00:15:14.828 Read completed with error (sct=0, sc=8) 00:15:14.828 starting I/O failed: -6 00:15:14.828 Read completed with error (sct=0, sc=8) 00:15:14.828 Read completed with error (sct=0, sc=8) 00:15:14.828 Read completed with error (sct=0, sc=8) 00:15:14.828 Read completed with error (sct=0, sc=8) 00:15:14.828 starting I/O failed: -6 00:15:14.828 Write completed with error (sct=0, sc=8) 00:15:14.828 Read completed with error (sct=0, sc=8) 00:15:14.828 Read completed with error (sct=0, sc=8) 00:15:14.828 Read completed with error (sct=0, sc=8) 00:15:14.828 starting I/O failed: -6 00:15:14.828 Read completed with error (sct=0, sc=8) 00:15:14.828 Read completed with error (sct=0, sc=8) 00:15:14.828 Read completed with error (sct=0, sc=8) 00:15:14.828 Read completed with error (sct=0, sc=8) 00:15:14.828 starting I/O failed: -6 00:15:14.828 Write completed with error (sct=0, sc=8) 00:15:14.828 Read completed 
with error (sct=0, sc=8) 00:15:14.828 (repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" completion entries at 00:15:14.828 condensed)
00:15:14.828 [2024-06-11 12:09:27.638278] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f358800c350 is same with the state(5) to be set
(further repeated completion-error entries between 00:15:14.828 and 00:15:15.771 condensed)
00:15:15.771 [2024-06-11 12:09:28.608508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbca670 is same with the state(5) to be set
(further repeated completion-error entries at 00:15:15.771 condensed)
00:15:15.771 [2024-06-11 12:09:28.640592] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcd2c0 is same with the state(5) to be set
(further repeated completion-error entries at 00:15:15.771 condensed)
00:15:15.771 [2024-06-11 12:09:28.640883] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcdea0 is same with the state(5) to be set
(further repeated completion-error entries at 00:15:15.771 condensed)
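These completion errors are the expected outcome rather than a transport failure: nvmf_delete_subsystem, issued at 12:09:27 above, tears the subsystem down while spdk_nvme_perf still has up to 128 commands outstanding against the delayed namespace, so the in-flight commands are aborted and the initiator logs them as failed completions. The status sct=0, sc=8 corresponds to the generic NVMe status "Command Aborted due to SQ Deletion", which is what a host should see when its queues disappear underneath it. The delete itself is a single RPC, roughly:

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The script then checks that the perf process has exited (the later kill -0 reports "No such process") before repeating the cycle with a fresh subsystem.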
Read completed with error (sct=0, sc=8) 00:15:15.771 Read completed with error (sct=0, sc=8) 00:15:15.771 Write completed with error (sct=0, sc=8) 00:15:15.771 Read completed with error (sct=0, sc=8) 00:15:15.771 Read completed with error (sct=0, sc=8) 00:15:15.771 Write completed with error (sct=0, sc=8) 00:15:15.771 Read completed with error (sct=0, sc=8) 00:15:15.771 Write completed with error (sct=0, sc=8) 00:15:15.771 Write completed with error (sct=0, sc=8) 00:15:15.771 [2024-06-11 12:09:28.641268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f358800bf20 is same with the state(5) to be set 00:15:15.771 Read completed with error (sct=0, sc=8) 00:15:15.771 Write completed with error (sct=0, sc=8) 00:15:15.771 Read completed with error (sct=0, sc=8) 00:15:15.771 Read completed with error (sct=0, sc=8) 00:15:15.771 Read completed with error (sct=0, sc=8) 00:15:15.771 Read completed with error (sct=0, sc=8) 00:15:15.771 Read completed with error (sct=0, sc=8) 00:15:15.771 Read completed with error (sct=0, sc=8) 00:15:15.771 Read completed with error (sct=0, sc=8) 00:15:15.771 Read completed with error (sct=0, sc=8) 00:15:15.771 Read completed with error (sct=0, sc=8) 00:15:15.771 Read completed with error (sct=0, sc=8) 00:15:15.771 Read completed with error (sct=0, sc=8) 00:15:15.771 Read completed with error (sct=0, sc=8) 00:15:15.771 [2024-06-11 12:09:28.641340] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f358800c600 is same with the state(5) to be set 00:15:15.771 [2024-06-11 12:09:28.641754] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbca670 (9): Bad file descriptor 00:15:15.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:15.771 Initializing NVMe Controllers 00:15:15.772 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:15.772 Controller IO queue size 128, less than required. 00:15:15.772 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:15.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:15.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:15.772 Initialization complete. Launching workers. 
00:15:15.772 ======================================================== 00:15:15.772 Latency(us) 00:15:15.772 Device Information : IOPS MiB/s Average min max 00:15:15.772 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 189.23 0.09 895479.35 303.32 1008757.92 00:15:15.772 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 142.42 0.07 1022167.60 240.22 2002707.24 00:15:15.772 ======================================================== 00:15:15.772 Total : 331.64 0.16 949883.01 240.22 2002707.24 00:15:15.772 00:15:15.772 12:09:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:15.772 12:09:28 -- target/delete_subsystem.sh@34 -- # delay=0 00:15:15.772 12:09:28 -- target/delete_subsystem.sh@35 -- # kill -0 1417624 00:15:15.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1417624) - No such process 00:15:15.772 12:09:28 -- target/delete_subsystem.sh@45 -- # NOT wait 1417624 00:15:15.772 12:09:28 -- common/autotest_common.sh@640 -- # local es=0 00:15:15.772 12:09:28 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 1417624 00:15:15.772 12:09:28 -- common/autotest_common.sh@628 -- # local arg=wait 00:15:15.772 12:09:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:15.772 12:09:28 -- common/autotest_common.sh@632 -- # type -t wait 00:15:15.772 12:09:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:15.772 12:09:28 -- common/autotest_common.sh@643 -- # wait 1417624 00:15:15.772 12:09:28 -- common/autotest_common.sh@643 -- # es=1 00:15:15.772 12:09:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:15.772 12:09:28 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:15.772 12:09:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:15.772 12:09:28 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:15.772 12:09:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:15.772 12:09:28 -- common/autotest_common.sh@10 -- # set +x 00:15:15.772 12:09:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:15.772 12:09:28 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:15.772 12:09:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:15.772 12:09:28 -- common/autotest_common.sh@10 -- # set +x 00:15:15.772 [2024-06-11 12:09:28.669121] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.772 12:09:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:15.772 12:09:28 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:15.772 12:09:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:15.772 12:09:28 -- common/autotest_common.sh@10 -- # set +x 00:15:15.772 12:09:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:15.772 12:09:28 -- target/delete_subsystem.sh@54 -- # perf_pid=1418213 00:15:15.772 12:09:28 -- target/delete_subsystem.sh@56 -- # delay=0 00:15:15.772 12:09:28 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:15.772 12:09:28 -- target/delete_subsystem.sh@57 -- # kill -0 1418213 00:15:15.772 12:09:28 -- target/delete_subsystem.sh@58 -- 
# sleep 0.5 00:15:15.772 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.772 [2024-06-11 12:09:28.741936] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:16.344 12:09:29 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:16.344 12:09:29 -- target/delete_subsystem.sh@57 -- # kill -0 1418213 00:15:16.344 12:09:29 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:16.915 12:09:29 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:16.915 12:09:29 -- target/delete_subsystem.sh@57 -- # kill -0 1418213 00:15:16.915 12:09:29 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:17.175 12:09:30 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:17.175 12:09:30 -- target/delete_subsystem.sh@57 -- # kill -0 1418213 00:15:17.175 12:09:30 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:17.744 12:09:30 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:17.744 12:09:30 -- target/delete_subsystem.sh@57 -- # kill -0 1418213 00:15:17.744 12:09:30 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:18.313 12:09:31 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:18.313 12:09:31 -- target/delete_subsystem.sh@57 -- # kill -0 1418213 00:15:18.313 12:09:31 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:18.883 12:09:31 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:18.883 12:09:31 -- target/delete_subsystem.sh@57 -- # kill -0 1418213 00:15:18.883 12:09:31 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:18.883 Initializing NVMe Controllers 00:15:18.884 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:18.884 Controller IO queue size 128, less than required. 00:15:18.884 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:18.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:18.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:18.884 Initialization complete. Launching workers. 
00:15:18.884 ======================================================== 00:15:18.884 Latency(us) 00:15:18.884 Device Information : IOPS MiB/s Average min max 00:15:18.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001971.58 1000175.89 1040609.06 00:15:18.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003097.21 1000254.79 1041566.27 00:15:18.884 ======================================================== 00:15:18.884 Total : 256.00 0.12 1002534.40 1000175.89 1041566.27 00:15:18.884 00:15:19.454 12:09:32 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:19.454 12:09:32 -- target/delete_subsystem.sh@57 -- # kill -0 1418213 00:15:19.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1418213) - No such process 00:15:19.454 12:09:32 -- target/delete_subsystem.sh@67 -- # wait 1418213 00:15:19.454 12:09:32 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:19.454 12:09:32 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:19.454 12:09:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:19.454 12:09:32 -- nvmf/common.sh@116 -- # sync 00:15:19.454 12:09:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:19.454 12:09:32 -- nvmf/common.sh@119 -- # set +e 00:15:19.454 12:09:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:19.454 12:09:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:19.454 rmmod nvme_tcp 00:15:19.454 rmmod nvme_fabrics 00:15:19.454 rmmod nvme_keyring 00:15:19.454 12:09:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:19.454 12:09:32 -- nvmf/common.sh@123 -- # set -e 00:15:19.454 12:09:32 -- nvmf/common.sh@124 -- # return 0 00:15:19.454 12:09:32 -- nvmf/common.sh@477 -- # '[' -n 1417278 ']' 00:15:19.454 12:09:32 -- nvmf/common.sh@478 -- # killprocess 1417278 00:15:19.454 12:09:32 -- common/autotest_common.sh@926 -- # '[' -z 1417278 ']' 00:15:19.454 12:09:32 -- common/autotest_common.sh@930 -- # kill -0 1417278 00:15:19.454 12:09:32 -- common/autotest_common.sh@931 -- # uname 00:15:19.454 12:09:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:19.454 12:09:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1417278 00:15:19.454 12:09:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:19.454 12:09:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:19.454 12:09:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1417278' 00:15:19.454 killing process with pid 1417278 00:15:19.454 12:09:32 -- common/autotest_common.sh@945 -- # kill 1417278 00:15:19.454 12:09:32 -- common/autotest_common.sh@950 -- # wait 1417278 00:15:19.454 12:09:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:19.454 12:09:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:19.454 12:09:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:19.454 12:09:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:19.454 12:09:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:19.454 12:09:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.454 12:09:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.454 12:09:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.998 12:09:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:21.998 00:15:21.998 real 0m17.326s 00:15:21.998 user 0m29.691s 00:15:21.998 sys 0m6.149s 
00:15:21.998 12:09:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.998 12:09:34 -- common/autotest_common.sh@10 -- # set +x 00:15:21.998 ************************************ 00:15:21.998 END TEST nvmf_delete_subsystem 00:15:21.998 ************************************ 00:15:21.998 12:09:34 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:15:21.998 12:09:34 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:21.998 12:09:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:21.998 12:09:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:21.998 12:09:34 -- common/autotest_common.sh@10 -- # set +x 00:15:21.998 ************************************ 00:15:21.998 START TEST nvmf_nvme_cli 00:15:21.998 ************************************ 00:15:21.998 12:09:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:21.998 * Looking for test storage... 00:15:21.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:21.998 12:09:34 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:21.998 12:09:34 -- nvmf/common.sh@7 -- # uname -s 00:15:21.998 12:09:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.998 12:09:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.998 12:09:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.998 12:09:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.998 12:09:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.998 12:09:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.998 12:09:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.998 12:09:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.998 12:09:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.999 12:09:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.999 12:09:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:21.999 12:09:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:21.999 12:09:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.999 12:09:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.999 12:09:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:21.999 12:09:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:21.999 12:09:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.999 12:09:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.999 12:09:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.999 12:09:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.999 12:09:34 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.999 12:09:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.999 12:09:34 -- paths/export.sh@5 -- # export PATH 00:15:21.999 12:09:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.999 12:09:34 -- nvmf/common.sh@46 -- # : 0 00:15:21.999 12:09:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:21.999 12:09:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:21.999 12:09:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:21.999 12:09:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.999 12:09:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.999 12:09:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:21.999 12:09:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:21.999 12:09:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:21.999 12:09:34 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:21.999 12:09:34 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:21.999 12:09:34 -- target/nvme_cli.sh@14 -- # devs=() 00:15:21.999 12:09:34 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:21.999 12:09:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:21.999 12:09:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.999 12:09:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:21.999 12:09:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:21.999 12:09:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:21.999 12:09:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.999 12:09:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.999 12:09:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.999 12:09:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:21.999 12:09:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:21.999 12:09:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:21.999 12:09:34 -- common/autotest_common.sh@10 -- # set +x 00:15:28.587 12:09:41 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:28.587 12:09:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:28.587 12:09:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:28.587 12:09:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:28.587 12:09:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:28.587 12:09:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:28.587 12:09:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:28.587 12:09:41 -- nvmf/common.sh@294 -- # net_devs=() 00:15:28.587 12:09:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:28.587 12:09:41 -- nvmf/common.sh@295 -- # e810=() 00:15:28.587 12:09:41 -- nvmf/common.sh@295 -- # local -ga e810 00:15:28.587 12:09:41 -- nvmf/common.sh@296 -- # x722=() 00:15:28.587 12:09:41 -- nvmf/common.sh@296 -- # local -ga x722 00:15:28.587 12:09:41 -- nvmf/common.sh@297 -- # mlx=() 00:15:28.587 12:09:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:28.587 12:09:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:28.587 12:09:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:28.587 12:09:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:28.587 12:09:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:28.587 12:09:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:28.587 12:09:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:28.587 12:09:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:28.587 12:09:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:28.587 12:09:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:28.587 12:09:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:28.587 12:09:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:28.587 12:09:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:28.587 12:09:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:28.587 12:09:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:28.587 12:09:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:28.587 12:09:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:28.587 12:09:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:28.587 12:09:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:28.587 12:09:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:28.587 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:28.587 12:09:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:28.587 12:09:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:28.587 12:09:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:28.587 12:09:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:28.587 12:09:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:28.587 12:09:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:28.587 12:09:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:28.587 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:28.587 12:09:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:28.587 12:09:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:28.587 12:09:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:28.587 12:09:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:28.587 12:09:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:15:28.587 12:09:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:28.587 12:09:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:28.587 12:09:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:28.587 12:09:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:28.587 12:09:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:28.587 12:09:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:28.587 12:09:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:28.587 12:09:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:28.587 Found net devices under 0000:31:00.0: cvl_0_0 00:15:28.587 12:09:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:28.587 12:09:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:28.587 12:09:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:28.587 12:09:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:28.587 12:09:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:28.587 12:09:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:28.587 Found net devices under 0000:31:00.1: cvl_0_1 00:15:28.587 12:09:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:28.587 12:09:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:28.588 12:09:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:28.588 12:09:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:28.588 12:09:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:28.588 12:09:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:28.588 12:09:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:28.588 12:09:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:28.588 12:09:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:28.588 12:09:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:28.588 12:09:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:28.588 12:09:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:28.588 12:09:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:28.588 12:09:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:28.588 12:09:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:28.588 12:09:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:28.588 12:09:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:28.588 12:09:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:28.588 12:09:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:28.849 12:09:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:28.849 12:09:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:28.849 12:09:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:28.849 12:09:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:28.849 12:09:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:29.110 12:09:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:29.110 12:09:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:29.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:29.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:15:29.110 00:15:29.110 --- 10.0.0.2 ping statistics --- 00:15:29.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.110 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:15:29.110 12:09:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:29.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:29.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:15:29.110 00:15:29.110 --- 10.0.0.1 ping statistics --- 00:15:29.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.110 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:15:29.110 12:09:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:29.110 12:09:41 -- nvmf/common.sh@410 -- # return 0 00:15:29.110 12:09:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:29.110 12:09:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:29.110 12:09:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:29.110 12:09:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:29.110 12:09:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:29.110 12:09:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:29.110 12:09:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:29.110 12:09:41 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:29.110 12:09:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:29.110 12:09:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:29.110 12:09:41 -- common/autotest_common.sh@10 -- # set +x 00:15:29.110 12:09:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:29.110 12:09:41 -- nvmf/common.sh@469 -- # nvmfpid=1423076 00:15:29.111 12:09:41 -- nvmf/common.sh@470 -- # waitforlisten 1423076 00:15:29.111 12:09:41 -- common/autotest_common.sh@819 -- # '[' -z 1423076 ']' 00:15:29.111 12:09:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.111 12:09:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:29.111 12:09:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.111 12:09:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:29.111 12:09:41 -- common/autotest_common.sh@10 -- # set +x 00:15:29.111 [2024-06-11 12:09:41.970209] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:29.111 [2024-06-11 12:09:41.970258] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.111 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.111 [2024-06-11 12:09:42.033849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:29.111 [2024-06-11 12:09:42.066263] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:29.111 [2024-06-11 12:09:42.066392] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.111 [2024-06-11 12:09:42.066403] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:29.111 [2024-06-11 12:09:42.066410] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:29.111 [2024-06-11 12:09:42.066564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.111 [2024-06-11 12:09:42.066686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.111 [2024-06-11 12:09:42.066706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:29.111 [2024-06-11 12:09:42.066715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.051 12:09:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:30.051 12:09:42 -- common/autotest_common.sh@852 -- # return 0 00:15:30.051 12:09:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:30.051 12:09:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:30.051 12:09:42 -- common/autotest_common.sh@10 -- # set +x 00:15:30.051 12:09:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.051 12:09:42 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:30.051 12:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.051 12:09:42 -- common/autotest_common.sh@10 -- # set +x 00:15:30.051 [2024-06-11 12:09:42.839531] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:30.051 12:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.051 12:09:42 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:30.051 12:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.051 12:09:42 -- common/autotest_common.sh@10 -- # set +x 00:15:30.051 Malloc0 00:15:30.051 12:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.051 12:09:42 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:30.051 12:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.051 12:09:42 -- common/autotest_common.sh@10 -- # set +x 00:15:30.051 Malloc1 00:15:30.051 12:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.051 12:09:42 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:30.051 12:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.051 12:09:42 -- common/autotest_common.sh@10 -- # set +x 00:15:30.051 12:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.051 12:09:42 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:30.051 12:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.051 12:09:42 -- common/autotest_common.sh@10 -- # set +x 00:15:30.051 12:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.051 12:09:42 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:30.051 12:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.051 12:09:42 -- common/autotest_common.sh@10 -- # set +x 00:15:30.051 12:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.051 12:09:42 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:30.051 12:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.051 12:09:42 -- common/autotest_common.sh@10 -- # set +x 00:15:30.051 [2024-06-11 12:09:42.926415] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:15:30.051 12:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.051 12:09:42 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:30.051 12:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.051 12:09:42 -- common/autotest_common.sh@10 -- # set +x 00:15:30.051 12:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.052 12:09:42 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:15:30.052 00:15:30.052 Discovery Log Number of Records 2, Generation counter 2 00:15:30.052 =====Discovery Log Entry 0====== 00:15:30.052 trtype: tcp 00:15:30.052 adrfam: ipv4 00:15:30.052 subtype: current discovery subsystem 00:15:30.052 treq: not required 00:15:30.052 portid: 0 00:15:30.052 trsvcid: 4420 00:15:30.052 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:30.052 traddr: 10.0.0.2 00:15:30.052 eflags: explicit discovery connections, duplicate discovery information 00:15:30.052 sectype: none 00:15:30.052 =====Discovery Log Entry 1====== 00:15:30.052 trtype: tcp 00:15:30.052 adrfam: ipv4 00:15:30.052 subtype: nvme subsystem 00:15:30.052 treq: not required 00:15:30.052 portid: 0 00:15:30.052 trsvcid: 4420 00:15:30.052 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:30.052 traddr: 10.0.0.2 00:15:30.052 eflags: none 00:15:30.052 sectype: none 00:15:30.052 12:09:43 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:30.052 12:09:43 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:30.052 12:09:43 -- nvmf/common.sh@510 -- # local dev _ 00:15:30.052 12:09:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:30.052 12:09:43 -- nvmf/common.sh@509 -- # nvme list 00:15:30.052 12:09:43 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:30.052 12:09:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:30.052 12:09:43 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:30.052 12:09:43 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:30.052 12:09:43 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:30.052 12:09:43 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:31.965 12:09:44 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:31.965 12:09:44 -- common/autotest_common.sh@1177 -- # local i=0 00:15:31.965 12:09:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:31.965 12:09:44 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:15:31.965 12:09:44 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:15:31.965 12:09:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:33.914 12:09:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:33.914 12:09:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:33.914 12:09:46 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:33.914 12:09:46 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:15:33.914 12:09:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:33.914 12:09:46 -- common/autotest_common.sh@1187 -- # return 0 00:15:33.914 12:09:46 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:33.914 12:09:46 -- 
nvmf/common.sh@510 -- # local dev _ 00:15:33.914 12:09:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:33.914 12:09:46 -- nvmf/common.sh@509 -- # nvme list 00:15:33.914 12:09:46 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:33.914 12:09:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:33.914 12:09:46 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:33.914 12:09:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:33.914 12:09:46 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:33.914 12:09:46 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:33.914 12:09:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:33.914 12:09:46 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:33.914 12:09:46 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:33.914 12:09:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:33.914 12:09:46 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:33.914 /dev/nvme0n1 ]] 00:15:33.914 12:09:46 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:33.914 12:09:46 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:33.914 12:09:46 -- nvmf/common.sh@510 -- # local dev _ 00:15:33.914 12:09:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:33.914 12:09:46 -- nvmf/common.sh@509 -- # nvme list 00:15:33.914 12:09:46 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:33.914 12:09:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:33.914 12:09:46 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:33.914 12:09:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:33.914 12:09:46 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:33.914 12:09:46 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:33.914 12:09:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:33.914 12:09:46 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:33.914 12:09:46 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:33.914 12:09:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:33.914 12:09:46 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:33.914 12:09:46 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:33.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.914 12:09:46 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:33.914 12:09:46 -- common/autotest_common.sh@1198 -- # local i=0 00:15:33.914 12:09:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:33.914 12:09:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:33.914 12:09:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:33.914 12:09:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:33.914 12:09:46 -- common/autotest_common.sh@1210 -- # return 0 00:15:33.914 12:09:46 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:33.914 12:09:46 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:33.914 12:09:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.914 12:09:46 -- common/autotest_common.sh@10 -- # set +x 00:15:33.914 12:09:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.914 12:09:46 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:33.914 12:09:46 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:33.914 12:09:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:33.914 12:09:46 -- nvmf/common.sh@116 -- # sync 00:15:33.914 12:09:46 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:33.914 12:09:46 -- nvmf/common.sh@119 -- # set +e 00:15:33.914 12:09:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:33.914 12:09:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:33.914 rmmod nvme_tcp 00:15:33.914 rmmod nvme_fabrics 00:15:33.914 rmmod nvme_keyring 00:15:33.914 12:09:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:33.914 12:09:46 -- nvmf/common.sh@123 -- # set -e 00:15:33.914 12:09:46 -- nvmf/common.sh@124 -- # return 0 00:15:33.914 12:09:46 -- nvmf/common.sh@477 -- # '[' -n 1423076 ']' 00:15:33.914 12:09:46 -- nvmf/common.sh@478 -- # killprocess 1423076 00:15:33.914 12:09:46 -- common/autotest_common.sh@926 -- # '[' -z 1423076 ']' 00:15:33.914 12:09:46 -- common/autotest_common.sh@930 -- # kill -0 1423076 00:15:33.914 12:09:46 -- common/autotest_common.sh@931 -- # uname 00:15:33.914 12:09:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:33.914 12:09:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1423076 00:15:33.914 12:09:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:33.914 12:09:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:33.914 12:09:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1423076' 00:15:33.914 killing process with pid 1423076 00:15:33.914 12:09:46 -- common/autotest_common.sh@945 -- # kill 1423076 00:15:33.914 12:09:46 -- common/autotest_common.sh@950 -- # wait 1423076 00:15:34.176 12:09:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:34.176 12:09:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:34.176 12:09:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:34.176 12:09:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:34.176 12:09:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:34.176 12:09:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.176 12:09:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.176 12:09:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.089 12:09:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:36.090 00:15:36.090 real 0m14.440s 00:15:36.090 user 0m21.607s 00:15:36.090 sys 0m5.825s 00:15:36.090 12:09:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:36.090 12:09:49 -- common/autotest_common.sh@10 -- # set +x 00:15:36.090 ************************************ 00:15:36.090 END TEST nvmf_nvme_cli 00:15:36.090 ************************************ 00:15:36.090 12:09:49 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:15:36.090 12:09:49 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:36.090 12:09:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:36.090 12:09:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:36.090 12:09:49 -- common/autotest_common.sh@10 -- # set +x 00:15:36.090 ************************************ 00:15:36.090 START TEST nvmf_vfio_user 00:15:36.090 ************************************ 00:15:36.090 12:09:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:36.351 * Looking for test storage... 
00:15:36.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:36.351 12:09:49 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:36.351 12:09:49 -- nvmf/common.sh@7 -- # uname -s 00:15:36.351 12:09:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:36.351 12:09:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:36.351 12:09:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:36.351 12:09:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:36.351 12:09:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:36.351 12:09:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:36.351 12:09:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:36.351 12:09:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:36.351 12:09:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:36.351 12:09:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:36.351 12:09:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:36.351 12:09:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:36.351 12:09:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:36.351 12:09:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:36.351 12:09:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:36.351 12:09:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:36.351 12:09:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:36.351 12:09:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:36.351 12:09:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:36.351 12:09:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.351 12:09:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.351 12:09:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.351 12:09:49 -- paths/export.sh@5 -- # export PATH 00:15:36.351 12:09:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.351 12:09:49 -- nvmf/common.sh@46 -- # : 0 00:15:36.351 12:09:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:36.351 12:09:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:36.351 12:09:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:36.351 12:09:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:36.351 12:09:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:36.351 12:09:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:36.351 12:09:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:36.351 12:09:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:36.351 12:09:49 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:36.351 12:09:49 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:36.352 12:09:49 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:36.352 12:09:49 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:36.352 12:09:49 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:36.352 12:09:49 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:36.352 12:09:49 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:36.352 12:09:49 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:36.352 12:09:49 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:36.352 12:09:49 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:36.352 12:09:49 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1424679 00:15:36.352 12:09:49 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1424679' 00:15:36.352 Process pid: 1424679 00:15:36.352 12:09:49 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:36.352 12:09:49 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1424679 00:15:36.352 12:09:49 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:36.352 12:09:49 -- common/autotest_common.sh@819 -- # '[' -z 1424679 ']' 00:15:36.352 12:09:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.352 12:09:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:36.352 12:09:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.352 12:09:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:36.352 12:09:49 -- common/autotest_common.sh@10 -- # set +x 00:15:36.352 [2024-06-11 12:09:49.270287] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:36.352 [2024-06-11 12:09:49.270364] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.352 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.352 [2024-06-11 12:09:49.329434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:36.352 [2024-06-11 12:09:49.361259] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:36.352 [2024-06-11 12:09:49.361395] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.352 [2024-06-11 12:09:49.361406] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.352 [2024-06-11 12:09:49.361414] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.352 [2024-06-11 12:09:49.361570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.352 [2024-06-11 12:09:49.361703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.352 [2024-06-11 12:09:49.361860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.352 [2024-06-11 12:09:49.361861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:37.295 12:09:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:37.295 12:09:50 -- common/autotest_common.sh@852 -- # return 0 00:15:37.295 12:09:50 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:38.238 12:09:51 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:38.238 12:09:51 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:38.238 12:09:51 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:38.238 12:09:51 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:38.238 12:09:51 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:38.238 12:09:51 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:38.499 Malloc1 00:15:38.499 12:09:51 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:38.758 12:09:51 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:38.758 12:09:51 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:39.019 12:09:51 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:39.019 12:09:51 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:39.019 12:09:51 -- 
target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:39.019 Malloc2 00:15:39.019 12:09:52 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:39.278 12:09:52 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:39.538 12:09:52 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:39.538 12:09:52 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:39.538 12:09:52 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:39.538 12:09:52 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:39.538 12:09:52 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:39.538 12:09:52 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:39.538 12:09:52 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:39.538 [2024-06-11 12:09:52.546510] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:39.538 [2024-06-11 12:09:52.546553] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1425316 ] 00:15:39.538 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.801 [2024-06-11 12:09:52.580645] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:39.801 [2024-06-11 12:09:52.585301] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:39.801 [2024-06-11 12:09:52.585319] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe7cf9aa000 00:15:39.801 [2024-06-11 12:09:52.586298] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:39.801 [2024-06-11 12:09:52.587299] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:39.801 [2024-06-11 12:09:52.588312] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:39.801 [2024-06-11 12:09:52.589319] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:39.801 [2024-06-11 12:09:52.590328] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:39.801 [2024-06-11 12:09:52.591334] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:39.801 [2024-06-11 12:09:52.592340] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:39.801 [2024-06-11 12:09:52.593347] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:39.801 [2024-06-11 12:09:52.594350] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:39.801 [2024-06-11 12:09:52.594360] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe7ce770000 00:15:39.801 [2024-06-11 12:09:52.595687] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:39.801 [2024-06-11 12:09:52.616176] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:39.801 [2024-06-11 12:09:52.616201] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:39.801 [2024-06-11 12:09:52.618487] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:39.801 [2024-06-11 12:09:52.618536] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:39.801 [2024-06-11 12:09:52.618620] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:39.801 [2024-06-11 12:09:52.618636] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:39.801 [2024-06-11 12:09:52.618642] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:39.801 [2024-06-11 12:09:52.619487] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:39.801 [2024-06-11 12:09:52.619498] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:39.801 [2024-06-11 12:09:52.619505] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:39.801 [2024-06-11 12:09:52.620486] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:39.801 [2024-06-11 12:09:52.620497] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:39.801 [2024-06-11 12:09:52.620504] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:39.801 [2024-06-11 12:09:52.621502] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:39.801 [2024-06-11 12:09:52.621510] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:39.801 [2024-06-11 12:09:52.622509] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 
0x1c, value 0x0 00:15:39.801 [2024-06-11 12:09:52.622517] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:39.801 [2024-06-11 12:09:52.622525] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:39.801 [2024-06-11 12:09:52.622532] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:39.801 [2024-06-11 12:09:52.622637] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:39.801 [2024-06-11 12:09:52.622641] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:39.801 [2024-06-11 12:09:52.622646] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:39.801 [2024-06-11 12:09:52.623512] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:39.801 [2024-06-11 12:09:52.624518] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:39.801 [2024-06-11 12:09:52.625531] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:39.801 [2024-06-11 12:09:52.626546] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:39.801 [2024-06-11 12:09:52.627535] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:39.801 [2024-06-11 12:09:52.627544] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:39.801 [2024-06-11 12:09:52.627548] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:39.801 [2024-06-11 12:09:52.627569] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:39.801 [2024-06-11 12:09:52.627577] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:39.801 [2024-06-11 12:09:52.627590] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:39.801 [2024-06-11 12:09:52.627595] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:39.801 [2024-06-11 12:09:52.627607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:39.801 [2024-06-11 12:09:52.627651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:39.801 [2024-06-11 12:09:52.627659] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:39.801 [2024-06-11 12:09:52.627665] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:39.801 [2024-06-11 12:09:52.627670] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:39.801 [2024-06-11 12:09:52.627674] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:39.801 [2024-06-11 12:09:52.627679] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:39.801 [2024-06-11 12:09:52.627684] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:39.801 [2024-06-11 12:09:52.627688] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:39.801 [2024-06-11 12:09:52.627700] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:39.801 [2024-06-11 12:09:52.627710] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:39.801 [2024-06-11 12:09:52.627719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:39.801 [2024-06-11 12:09:52.627729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.801 [2024-06-11 12:09:52.627737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.801 [2024-06-11 12:09:52.627745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.801 [2024-06-11 12:09:52.627754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.801 [2024-06-11 12:09:52.627758] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:39.801 [2024-06-11 12:09:52.627768] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:39.801 [2024-06-11 12:09:52.627777] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:39.801 [2024-06-11 12:09:52.627788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:39.801 [2024-06-11 12:09:52.627793] nvme_ctrlr.c:2877:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:39.801 [2024-06-11 12:09:52.627799] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:39.801 [2024-06-11 12:09:52.627805] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:39.801 [2024-06-11 12:09:52.627812] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:39.801 [2024-06-11 12:09:52.627821] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:39.801 [2024-06-11 12:09:52.627832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:39.801 [2024-06-11 12:09:52.627878] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:39.801 [2024-06-11 12:09:52.627886] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:39.801 [2024-06-11 12:09:52.627893] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:39.801 [2024-06-11 12:09:52.627897] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:39.802 [2024-06-11 12:09:52.627903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:39.802 [2024-06-11 12:09:52.627914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:39.802 [2024-06-11 12:09:52.627922] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:39.802 [2024-06-11 12:09:52.627934] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:39.802 [2024-06-11 12:09:52.627943] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:39.802 [2024-06-11 12:09:52.627949] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:39.802 [2024-06-11 12:09:52.627953] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:39.802 [2024-06-11 12:09:52.627959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:39.802 [2024-06-11 12:09:52.627976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:39.802 [2024-06-11 12:09:52.627987] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:39.802 [2024-06-11 12:09:52.627994] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:39.802 [2024-06-11 12:09:52.628001] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:39.802 [2024-06-11 12:09:52.628005] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:39.802 [2024-06-11 12:09:52.628011] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:39.802 [2024-06-11 12:09:52.628023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:39.802 [2024-06-11 12:09:52.628031] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:39.802 [2024-06-11 12:09:52.628037] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:39.802 [2024-06-11 12:09:52.628044] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:39.802 [2024-06-11 12:09:52.628050] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:39.802 [2024-06-11 12:09:52.628055] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:39.802 [2024-06-11 12:09:52.628060] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:39.802 [2024-06-11 12:09:52.628064] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:39.802 [2024-06-11 12:09:52.628070] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:39.802 [2024-06-11 12:09:52.628086] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:39.802 [2024-06-11 12:09:52.628095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:39.802 [2024-06-11 12:09:52.628106] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:39.802 [2024-06-11 12:09:52.628121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:39.802 [2024-06-11 12:09:52.628131] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:39.802 [2024-06-11 12:09:52.628140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:39.802 [2024-06-11 12:09:52.628152] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:39.802 [2024-06-11 12:09:52.628159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:39.802 [2024-06-11 12:09:52.628168] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:39.802 [2024-06-11 12:09:52.628173] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:39.802 [2024-06-11 12:09:52.628176] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:39.802 [2024-06-11 
12:09:52.628179] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:39.802 [2024-06-11 12:09:52.628186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:39.802 [2024-06-11 12:09:52.628193] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:39.802 [2024-06-11 12:09:52.628197] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:39.802 [2024-06-11 12:09:52.628203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:39.802 [2024-06-11 12:09:52.628210] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:39.802 [2024-06-11 12:09:52.628214] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:39.802 [2024-06-11 12:09:52.628220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:39.802 [2024-06-11 12:09:52.628227] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:39.802 [2024-06-11 12:09:52.628231] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:39.802 [2024-06-11 12:09:52.628237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:39.802 [2024-06-11 12:09:52.628244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:39.802 [2024-06-11 12:09:52.628256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:39.802 [2024-06-11 12:09:52.628265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:39.802 [2024-06-11 12:09:52.628271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:39.802 ===================================================== 00:15:39.802 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:39.802 ===================================================== 00:15:39.802 Controller Capabilities/Features 00:15:39.802 ================================ 00:15:39.802 Vendor ID: 4e58 00:15:39.802 Subsystem Vendor ID: 4e58 00:15:39.802 Serial Number: SPDK1 00:15:39.802 Model Number: SPDK bdev Controller 00:15:39.802 Firmware Version: 24.01.1 00:15:39.802 Recommended Arb Burst: 6 00:15:39.802 IEEE OUI Identifier: 8d 6b 50 00:15:39.802 Multi-path I/O 00:15:39.802 May have multiple subsystem ports: Yes 00:15:39.802 May have multiple controllers: Yes 00:15:39.802 Associated with SR-IOV VF: No 00:15:39.802 Max Data Transfer Size: 131072 00:15:39.802 Max Number of Namespaces: 32 00:15:39.802 Max Number of I/O Queues: 127 00:15:39.802 NVMe Specification Version (VS): 1.3 00:15:39.802 NVMe Specification Version (Identify): 1.3 00:15:39.802 Maximum Queue Entries: 256 00:15:39.802 Contiguous Queues Required: Yes 00:15:39.802 Arbitration Mechanisms Supported 00:15:39.802 
Weighted Round Robin: Not Supported 00:15:39.802 Vendor Specific: Not Supported 00:15:39.802 Reset Timeout: 15000 ms 00:15:39.802 Doorbell Stride: 4 bytes 00:15:39.802 NVM Subsystem Reset: Not Supported 00:15:39.802 Command Sets Supported 00:15:39.802 NVM Command Set: Supported 00:15:39.802 Boot Partition: Not Supported 00:15:39.802 Memory Page Size Minimum: 4096 bytes 00:15:39.802 Memory Page Size Maximum: 4096 bytes 00:15:39.802 Persistent Memory Region: Not Supported 00:15:39.802 Optional Asynchronous Events Supported 00:15:39.802 Namespace Attribute Notices: Supported 00:15:39.802 Firmware Activation Notices: Not Supported 00:15:39.802 ANA Change Notices: Not Supported 00:15:39.802 PLE Aggregate Log Change Notices: Not Supported 00:15:39.802 LBA Status Info Alert Notices: Not Supported 00:15:39.802 EGE Aggregate Log Change Notices: Not Supported 00:15:39.802 Normal NVM Subsystem Shutdown event: Not Supported 00:15:39.802 Zone Descriptor Change Notices: Not Supported 00:15:39.802 Discovery Log Change Notices: Not Supported 00:15:39.802 Controller Attributes 00:15:39.802 128-bit Host Identifier: Supported 00:15:39.802 Non-Operational Permissive Mode: Not Supported 00:15:39.802 NVM Sets: Not Supported 00:15:39.802 Read Recovery Levels: Not Supported 00:15:39.802 Endurance Groups: Not Supported 00:15:39.802 Predictable Latency Mode: Not Supported 00:15:39.802 Traffic Based Keep ALive: Not Supported 00:15:39.802 Namespace Granularity: Not Supported 00:15:39.802 SQ Associations: Not Supported 00:15:39.802 UUID List: Not Supported 00:15:39.802 Multi-Domain Subsystem: Not Supported 00:15:39.802 Fixed Capacity Management: Not Supported 00:15:39.802 Variable Capacity Management: Not Supported 00:15:39.802 Delete Endurance Group: Not Supported 00:15:39.802 Delete NVM Set: Not Supported 00:15:39.802 Extended LBA Formats Supported: Not Supported 00:15:39.802 Flexible Data Placement Supported: Not Supported 00:15:39.802 00:15:39.802 Controller Memory Buffer Support 00:15:39.802 ================================ 00:15:39.802 Supported: No 00:15:39.802 00:15:39.802 Persistent Memory Region Support 00:15:39.802 ================================ 00:15:39.802 Supported: No 00:15:39.802 00:15:39.802 Admin Command Set Attributes 00:15:39.802 ============================ 00:15:39.802 Security Send/Receive: Not Supported 00:15:39.802 Format NVM: Not Supported 00:15:39.802 Firmware Activate/Download: Not Supported 00:15:39.803 Namespace Management: Not Supported 00:15:39.803 Device Self-Test: Not Supported 00:15:39.803 Directives: Not Supported 00:15:39.803 NVMe-MI: Not Supported 00:15:39.803 Virtualization Management: Not Supported 00:15:39.803 Doorbell Buffer Config: Not Supported 00:15:39.803 Get LBA Status Capability: Not Supported 00:15:39.803 Command & Feature Lockdown Capability: Not Supported 00:15:39.803 Abort Command Limit: 4 00:15:39.803 Async Event Request Limit: 4 00:15:39.803 Number of Firmware Slots: N/A 00:15:39.803 Firmware Slot 1 Read-Only: N/A 00:15:39.803 Firmware Activation Without Reset: N/A 00:15:39.803 Multiple Update Detection Support: N/A 00:15:39.803 Firmware Update Granularity: No Information Provided 00:15:39.803 Per-Namespace SMART Log: No 00:15:39.803 Asymmetric Namespace Access Log Page: Not Supported 00:15:39.803 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:39.803 Command Effects Log Page: Supported 00:15:39.803 Get Log Page Extended Data: Supported 00:15:39.803 Telemetry Log Pages: Not Supported 00:15:39.803 Persistent Event Log Pages: Not Supported 00:15:39.803 Supported 
Log Pages Log Page: May Support 00:15:39.803 Commands Supported & Effects Log Page: Not Supported 00:15:39.803 Feature Identifiers & Effects Log Page:May Support 00:15:39.803 NVMe-MI Commands & Effects Log Page: May Support 00:15:39.803 Data Area 4 for Telemetry Log: Not Supported 00:15:39.803 Error Log Page Entries Supported: 128 00:15:39.803 Keep Alive: Supported 00:15:39.803 Keep Alive Granularity: 10000 ms 00:15:39.803 00:15:39.803 NVM Command Set Attributes 00:15:39.803 ========================== 00:15:39.803 Submission Queue Entry Size 00:15:39.803 Max: 64 00:15:39.803 Min: 64 00:15:39.803 Completion Queue Entry Size 00:15:39.803 Max: 16 00:15:39.803 Min: 16 00:15:39.803 Number of Namespaces: 32 00:15:39.803 Compare Command: Supported 00:15:39.803 Write Uncorrectable Command: Not Supported 00:15:39.803 Dataset Management Command: Supported 00:15:39.803 Write Zeroes Command: Supported 00:15:39.803 Set Features Save Field: Not Supported 00:15:39.803 Reservations: Not Supported 00:15:39.803 Timestamp: Not Supported 00:15:39.803 Copy: Supported 00:15:39.803 Volatile Write Cache: Present 00:15:39.803 Atomic Write Unit (Normal): 1 00:15:39.803 Atomic Write Unit (PFail): 1 00:15:39.803 Atomic Compare & Write Unit: 1 00:15:39.803 Fused Compare & Write: Supported 00:15:39.803 Scatter-Gather List 00:15:39.803 SGL Command Set: Supported (Dword aligned) 00:15:39.803 SGL Keyed: Not Supported 00:15:39.803 SGL Bit Bucket Descriptor: Not Supported 00:15:39.803 SGL Metadata Pointer: Not Supported 00:15:39.803 Oversized SGL: Not Supported 00:15:39.803 SGL Metadata Address: Not Supported 00:15:39.803 SGL Offset: Not Supported 00:15:39.803 Transport SGL Data Block: Not Supported 00:15:39.803 Replay Protected Memory Block: Not Supported 00:15:39.803 00:15:39.803 Firmware Slot Information 00:15:39.803 ========================= 00:15:39.803 Active slot: 1 00:15:39.803 Slot 1 Firmware Revision: 24.01.1 00:15:39.803 00:15:39.803 00:15:39.803 Commands Supported and Effects 00:15:39.803 ============================== 00:15:39.803 Admin Commands 00:15:39.803 -------------- 00:15:39.803 Get Log Page (02h): Supported 00:15:39.803 Identify (06h): Supported 00:15:39.803 Abort (08h): Supported 00:15:39.803 Set Features (09h): Supported 00:15:39.803 Get Features (0Ah): Supported 00:15:39.803 Asynchronous Event Request (0Ch): Supported 00:15:39.803 Keep Alive (18h): Supported 00:15:39.803 I/O Commands 00:15:39.803 ------------ 00:15:39.803 Flush (00h): Supported LBA-Change 00:15:39.803 Write (01h): Supported LBA-Change 00:15:39.803 Read (02h): Supported 00:15:39.803 Compare (05h): Supported 00:15:39.803 Write Zeroes (08h): Supported LBA-Change 00:15:39.803 Dataset Management (09h): Supported LBA-Change 00:15:39.803 Copy (19h): Supported LBA-Change 00:15:39.803 Unknown (79h): Supported LBA-Change 00:15:39.803 Unknown (7Ah): Supported 00:15:39.803 00:15:39.803 Error Log 00:15:39.803 ========= 00:15:39.803 00:15:39.803 Arbitration 00:15:39.803 =========== 00:15:39.803 Arbitration Burst: 1 00:15:39.803 00:15:39.803 Power Management 00:15:39.803 ================ 00:15:39.803 Number of Power States: 1 00:15:39.803 Current Power State: Power State #0 00:15:39.803 Power State #0: 00:15:39.803 Max Power: 0.00 W 00:15:39.803 Non-Operational State: Operational 00:15:39.803 Entry Latency: Not Reported 00:15:39.803 Exit Latency: Not Reported 00:15:39.803 Relative Read Throughput: 0 00:15:39.803 Relative Read Latency: 0 00:15:39.803 Relative Write Throughput: 0 00:15:39.803 Relative Write Latency: 0 00:15:39.803 Idle Power: Not 
Reported 00:15:39.803 Active Power: Not Reported 00:15:39.803 Non-Operational Permissive Mode: Not Supported 00:15:39.803 00:15:39.803 Health Information 00:15:39.803 ================== 00:15:39.803 Critical Warnings: 00:15:39.803 Available Spare Space: OK 00:15:39.803 Temperature: OK 00:15:39.803 Device Reliability: OK 00:15:39.803 Read Only: No 00:15:39.803 Volatile Memory Backup: OK 00:15:39.803 Current Temperature: 0 Kelvin[2024-06-11 12:09:52.628461] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:39.803 [2024-06-11 12:09:52.628472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:39.803 [2024-06-11 12:09:52.628498] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:39.803 [2024-06-11 12:09:52.628506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.803 [2024-06-11 12:09:52.628513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.803 [2024-06-11 12:09:52.628519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.803 [2024-06-11 12:09:52.628525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.803 [2024-06-11 12:09:52.631024] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:39.803 [2024-06-11 12:09:52.631038] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:39.803 [2024-06-11 12:09:52.631585] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:39.803 [2024-06-11 12:09:52.631591] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:39.803 [2024-06-11 12:09:52.632565] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:39.803 [2024-06-11 12:09:52.632575] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:39.803 [2024-06-11 12:09:52.632638] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:39.803 [2024-06-11 12:09:52.636025] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:39.803 (-273 Celsius) 00:15:39.803 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:39.803 Available Spare: 0% 00:15:39.803 Available Spare Threshold: 0% 00:15:39.803 Life Percentage Used: 0% 00:15:39.803 Data Units Read: 0 00:15:39.803 Data Units Written: 0 00:15:39.803 Host Read Commands: 0 00:15:39.803 Host Write Commands: 0 00:15:39.803 Controller Busy Time: 0 minutes 00:15:39.803 Power Cycles: 0 00:15:39.803 Power On Hours: 0 hours 00:15:39.803 Unsafe Shutdowns: 0 00:15:39.803 Unrecoverable Media Errors: 0 00:15:39.803 Lifetime Error Log Entries: 0 00:15:39.803 Warning Temperature 
Time: 0 minutes 00:15:39.803 Critical Temperature Time: 0 minutes 00:15:39.803 00:15:39.803 Number of Queues 00:15:39.803 ================ 00:15:39.803 Number of I/O Submission Queues: 127 00:15:39.803 Number of I/O Completion Queues: 127 00:15:39.803 00:15:39.803 Active Namespaces 00:15:39.803 ================= 00:15:39.803 Namespace ID:1 00:15:39.803 Error Recovery Timeout: Unlimited 00:15:39.803 Command Set Identifier: NVM (00h) 00:15:39.803 Deallocate: Supported 00:15:39.803 Deallocated/Unwritten Error: Not Supported 00:15:39.803 Deallocated Read Value: Unknown 00:15:39.803 Deallocate in Write Zeroes: Not Supported 00:15:39.803 Deallocated Guard Field: 0xFFFF 00:15:39.803 Flush: Supported 00:15:39.803 Reservation: Supported 00:15:39.803 Namespace Sharing Capabilities: Multiple Controllers 00:15:39.803 Size (in LBAs): 131072 (0GiB) 00:15:39.803 Capacity (in LBAs): 131072 (0GiB) 00:15:39.803 Utilization (in LBAs): 131072 (0GiB) 00:15:39.803 NGUID: 2771E1AACCF84AF4B8205C51A8A00271 00:15:39.803 UUID: 2771e1aa-ccf8-4af4-b820-5c51a8a00271 00:15:39.803 Thin Provisioning: Not Supported 00:15:39.804 Per-NS Atomic Units: Yes 00:15:39.804 Atomic Boundary Size (Normal): 0 00:15:39.804 Atomic Boundary Size (PFail): 0 00:15:39.804 Atomic Boundary Offset: 0 00:15:39.804 Maximum Single Source Range Length: 65535 00:15:39.804 Maximum Copy Length: 65535 00:15:39.804 Maximum Source Range Count: 1 00:15:39.804 NGUID/EUI64 Never Reused: No 00:15:39.804 Namespace Write Protected: No 00:15:39.804 Number of LBA Formats: 1 00:15:39.804 Current LBA Format: LBA Format #00 00:15:39.804 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:39.804 00:15:39.804 12:09:52 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:39.804 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.141 Initializing NVMe Controllers 00:15:45.141 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:45.141 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:45.141 Initialization complete. Launching workers. 00:15:45.141 ======================================================== 00:15:45.142 Latency(us) 00:15:45.142 Device Information : IOPS MiB/s Average min max 00:15:45.142 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39924.65 155.96 3205.72 833.50 6838.71 00:15:45.142 ======================================================== 00:15:45.142 Total : 39924.65 155.96 3205.72 833.50 6838.71 00:15:45.142 00:15:45.142 12:09:57 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:45.142 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.427 Initializing NVMe Controllers 00:15:50.427 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:50.427 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:50.427 Initialization complete. Launching workers. 
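For reference, the target-side sequence exercised above by nvmf_vfio_user.sh (vfio-user transport, one malloc-backed subsystem per device) and the host-side identify/perf invocations condense to the sketch below. Every command is taken verbatim from this run; only the $SPDK shorthand for the checked-out /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk tree is added here, and cnode2 under /var/run/vfio-user/domain/vfio-user2/2 is created the same way with Malloc2/SPDK2.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# target side: vfio-user transport plus a 64 MiB, 512-byte-block malloc namespace
$SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
# host side: identify the controller, then the queued 4 KiB read run whose results appear above
$SPDK/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci
$SPDK/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2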
00:15:50.427 ======================================================== 00:15:50.427 Latency(us) 00:15:50.427 Device Information : IOPS MiB/s Average min max 00:15:50.427 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16015.20 62.56 7991.89 6988.30 15967.05 00:15:50.427 ======================================================== 00:15:50.427 Total : 16015.20 62.56 7991.89 6988.30 15967.05 00:15:50.427 00:15:50.427 12:10:03 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:50.427 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.710 Initializing NVMe Controllers 00:15:55.710 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:55.710 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:55.710 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:55.710 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:55.710 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:55.710 Initialization complete. Launching workers. 00:15:55.710 Starting thread on core 2 00:15:55.710 Starting thread on core 3 00:15:55.710 Starting thread on core 1 00:15:55.710 12:10:08 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:55.710 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.006 Initializing NVMe Controllers 00:15:59.006 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:59.006 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:59.006 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:59.006 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:59.006 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:59.006 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:59.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:59.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:59.006 Initialization complete. Launching workers. 
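The reconnect and arbitration examples started above take the same vfio-user transport ID string as identify/perf. A minimal recap of the two invocations from this run follows; $SPDK and $TRID are introduced here purely as shorthand, and the comments only restate what the surrounding output already shows (queue depth, 4 KiB I/O, 50/50 random read/write mix, core masks, run times).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
# reconnect: 32-deep 4 KiB 50/50 randrw for 5 s on cores 1-3 (-c 0xE)
$SPDK/build/examples/reconnect -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
# arbitration: 3 s run; the per-core IO/s figures for "SPDK bdev Controller (SPDK1)" are printed below
$SPDK/build/examples/arbitration -t 3 -r "$TRID" -d 256 -g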
00:15:59.006 Starting thread on core 1 with urgent priority queue 00:15:59.007 Starting thread on core 2 with urgent priority queue 00:15:59.007 Starting thread on core 3 with urgent priority queue 00:15:59.007 Starting thread on core 0 with urgent priority queue 00:15:59.007 SPDK bdev Controller (SPDK1 ) core 0: 8618.67 IO/s 11.60 secs/100000 ios 00:15:59.007 SPDK bdev Controller (SPDK1 ) core 1: 16160.00 IO/s 6.19 secs/100000 ios 00:15:59.007 SPDK bdev Controller (SPDK1 ) core 2: 7724.00 IO/s 12.95 secs/100000 ios 00:15:59.007 SPDK bdev Controller (SPDK1 ) core 3: 14614.67 IO/s 6.84 secs/100000 ios 00:15:59.007 ======================================================== 00:15:59.007 00:15:59.007 12:10:11 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:59.007 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.007 Initializing NVMe Controllers 00:15:59.007 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:59.007 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:59.007 Namespace ID: 1 size: 0GB 00:15:59.007 Initialization complete. 00:15:59.007 INFO: using host memory buffer for IO 00:15:59.007 Hello world! 00:15:59.007 12:10:11 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:59.007 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.398 Initializing NVMe Controllers 00:16:00.398 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:00.398 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:00.398 Initialization complete. Launching workers. 
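hello_world and the overhead tool reuse the same transport ID. A condensed sketch of the two commands as issued in this run; the Submit/Complete histograms that follow (in nanoseconds) come from the overhead run.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
# hello_world: single-I/O sanity check against namespace 1 that ends with "Hello world!"
$SPDK/build/examples/hello_world -d 256 -g -r "$TRID"
# overhead: 4 KiB I/O for 1 s; per-IO submit/complete latency histograms are printed below
$SPDK/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r "$TRID"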
00:16:00.398 submit (in ns) avg, min, max = 7746.4, 3854.2, 3999890.8 00:16:00.398 complete (in ns) avg, min, max = 18444.1, 2357.5, 3999812.5 00:16:00.398 00:16:00.398 Submit histogram 00:16:00.398 ================ 00:16:00.398 Range in us Cumulative Count 00:16:00.398 3.840 - 3.867: 0.3374% ( 65) 00:16:00.398 3.867 - 3.893: 3.7739% ( 662) 00:16:00.398 3.893 - 3.920: 10.3717% ( 1271) 00:16:00.398 3.920 - 3.947: 21.2157% ( 2089) 00:16:00.398 3.947 - 3.973: 33.6742% ( 2400) 00:16:00.398 3.973 - 4.000: 45.6292% ( 2303) 00:16:00.398 4.000 - 4.027: 63.0191% ( 3350) 00:16:00.398 4.027 - 4.053: 78.3794% ( 2959) 00:16:00.398 4.053 - 4.080: 89.4778% ( 2138) 00:16:00.398 4.080 - 4.107: 95.2346% ( 1109) 00:16:00.398 4.107 - 4.133: 97.8405% ( 502) 00:16:00.398 4.133 - 4.160: 98.8943% ( 203) 00:16:00.398 4.160 - 4.187: 99.3044% ( 79) 00:16:00.398 4.187 - 4.213: 99.4394% ( 26) 00:16:00.398 4.213 - 4.240: 99.4549% ( 3) 00:16:00.398 4.240 - 4.267: 99.4601% ( 1) 00:16:00.398 4.320 - 4.347: 99.4653% ( 1) 00:16:00.398 4.373 - 4.400: 99.4705% ( 1) 00:16:00.398 4.427 - 4.453: 99.4757% ( 1) 00:16:00.398 4.533 - 4.560: 99.4809% ( 1) 00:16:00.398 4.640 - 4.667: 99.4861% ( 1) 00:16:00.398 4.693 - 4.720: 99.4913% ( 1) 00:16:00.398 4.880 - 4.907: 99.4965% ( 1) 00:16:00.398 4.933 - 4.960: 99.5017% ( 1) 00:16:00.398 5.013 - 5.040: 99.5069% ( 1) 00:16:00.398 5.067 - 5.093: 99.5120% ( 1) 00:16:00.398 5.147 - 5.173: 99.5172% ( 1) 00:16:00.398 5.227 - 5.253: 99.5224% ( 1) 00:16:00.398 5.307 - 5.333: 99.5276% ( 1) 00:16:00.398 5.333 - 5.360: 99.5380% ( 2) 00:16:00.398 5.413 - 5.440: 99.5432% ( 1) 00:16:00.398 5.600 - 5.627: 99.5484% ( 1) 00:16:00.398 5.733 - 5.760: 99.5536% ( 1) 00:16:00.398 5.787 - 5.813: 99.5588% ( 1) 00:16:00.398 5.840 - 5.867: 99.5640% ( 1) 00:16:00.398 5.867 - 5.893: 99.5743% ( 2) 00:16:00.398 5.920 - 5.947: 99.5847% ( 2) 00:16:00.398 5.973 - 6.000: 99.5899% ( 1) 00:16:00.398 6.000 - 6.027: 99.6003% ( 2) 00:16:00.398 6.027 - 6.053: 99.6107% ( 2) 00:16:00.398 6.053 - 6.080: 99.6211% ( 2) 00:16:00.398 6.080 - 6.107: 99.6314% ( 2) 00:16:00.398 6.107 - 6.133: 99.6366% ( 1) 00:16:00.398 6.133 - 6.160: 99.6418% ( 1) 00:16:00.398 6.187 - 6.213: 99.6470% ( 1) 00:16:00.398 6.213 - 6.240: 99.6678% ( 4) 00:16:00.398 6.240 - 6.267: 99.6730% ( 1) 00:16:00.398 6.267 - 6.293: 99.6833% ( 2) 00:16:00.398 6.347 - 6.373: 99.6885% ( 1) 00:16:00.398 6.400 - 6.427: 99.6989% ( 2) 00:16:00.398 6.453 - 6.480: 99.7093% ( 2) 00:16:00.398 6.480 - 6.507: 99.7197% ( 2) 00:16:00.398 6.560 - 6.587: 99.7249% ( 1) 00:16:00.398 6.613 - 6.640: 99.7404% ( 3) 00:16:00.398 6.640 - 6.667: 99.7456% ( 1) 00:16:00.398 6.667 - 6.693: 99.7560% ( 2) 00:16:00.398 6.693 - 6.720: 99.7664% ( 2) 00:16:00.398 6.720 - 6.747: 99.7924% ( 5) 00:16:00.398 6.773 - 6.800: 99.7975% ( 1) 00:16:00.398 6.800 - 6.827: 99.8027% ( 1) 00:16:00.398 6.880 - 6.933: 99.8079% ( 1) 00:16:00.398 7.093 - 7.147: 99.8131% ( 1) 00:16:00.398 7.253 - 7.307: 99.8287% ( 3) 00:16:00.398 7.573 - 7.627: 99.8339% ( 1) 00:16:00.398 7.733 - 7.787: 99.8391% ( 1) 00:16:00.398 7.787 - 7.840: 99.8443% ( 1) 00:16:00.398 7.893 - 7.947: 99.8495% ( 1) 00:16:00.398 8.000 - 8.053: 99.8547% ( 1) 00:16:00.398 8.053 - 8.107: 99.8598% ( 1) 00:16:00.398 8.320 - 8.373: 99.8650% ( 1) 00:16:00.398 8.427 - 8.480: 99.8702% ( 1) 00:16:00.398 8.960 - 9.013: 99.8806% ( 2) 00:16:00.398 10.453 - 10.507: 99.8858% ( 1) 00:16:00.398 10.507 - 10.560: 99.8910% ( 1) 00:16:00.398 12.533 - 12.587: 99.8962% ( 1) 00:16:00.398 12.960 - 13.013: 99.9014% ( 1) 00:16:00.398 15.573 - 15.680: 99.9066% ( 1) 00:16:00.398 3986.773 
- 4014.080: 100.0000% ( 18) 00:16:00.398 00:16:00.398 Complete histogram 00:16:00.398 ================== 00:16:00.398 Range in us Cumulative Count 00:16:00.398 2.347 - 2.360: 0.0052% ( 1) 00:16:00.398 2.360 - 2.373: 0.1246% ( 23) 00:16:00.398 2.373 - 2.387: 1.5210% ( 269) 00:16:00.398 2.387 - 2.400: 1.6196% ( 19) 00:16:00.398 2.400 - 2.413: 1.8065% ( 36) 00:16:00.398 2.413 - 2.427: 1.8688% ( 12) 00:16:00.398 2.427 - 2.440: 14.5245% ( 2438) 00:16:00.398 2.440 - 2.453: 57.0546% ( 8193) 00:16:00.398 2.453 - 2.467: 67.4990% ( 2012) 00:16:00.398 2.467 - 2.480: 75.8306% ( 1605) 00:16:00.398 2.480 - 2.493: 80.3935% ( 879) 00:16:00.398 2.493 - 2.507: 82.0131% ( 312) 00:16:00.398 2.507 - 2.520: 87.3962% ( 1037) 00:16:00.398 2.520 - 2.533: 93.3347% ( 1144) 00:16:00.398 2.533 - 2.547: 96.4234% ( 595) 00:16:00.398 2.547 - 2.560: 98.0118% ( 306) 00:16:00.398 2.560 - 2.573: 98.9826% ( 187) 00:16:00.398 2.573 - 2.587: 99.2784% ( 57) 00:16:00.398 2.587 - 2.600: 99.3407% ( 12) 00:16:00.398 2.600 - 2.613: 99.3459% ( 1) 00:16:00.398 2.640 - 2.653: 99.3511% ( 1) 00:16:00.398 4.213 - 4.240: 99.3563% ( 1) 00:16:00.398 4.267 - 4.293: 99.3615% ( 1) 00:16:00.398 4.347 - 4.373: 99.3667% ( 1) 00:16:00.398 4.453 - 4.480: 99.3771% ( 2) 00:16:00.398 4.480 - 4.507: 99.3823% ( 1) 00:16:00.398 4.507 - 4.533: 99.3875% ( 1) 00:16:00.398 4.533 - 4.560: 99.3926% ( 1) 00:16:00.398 4.773 - 4.800: 99.4030% ( 2) 00:16:00.398 4.800 - 4.827: 99.4134% ( 2) 00:16:00.398 4.827 - 4.853: 99.4238% ( 2) 00:16:00.398 4.853 - 4.880: 99.4342% ( 2) 00:16:00.398 4.880 - 4.907: 99.4498% ( 3) 00:16:00.398 4.987 - 5.013: 99.4549% ( 1) 00:16:00.398 5.013 - 5.040: 99.4601% ( 1) 00:16:00.398 5.040 - 5.067: 99.4809% ( 4) 00:16:00.398 5.067 - 5.093: 99.4861% ( 1) 00:16:00.398 5.120 - 5.147: 99.4913% ( 1) 00:16:00.398 5.413 - 5.440: 99.4965% ( 1) 00:16:00.398 5.440 - 5.467: 99.5017% ( 1) 00:16:00.398 5.467 - 5.493: 99.5069% ( 1) 00:16:00.398 5.520 - 5.547: 99.5172% ( 2) 00:16:00.398 5.547 - 5.573: 99.5224% ( 1) 00:16:00.398 5.573 - 5.600: 99.5276% ( 1) 00:16:00.398 5.627 - 5.653: 99.5328% ( 1) 00:16:00.398 5.653 - 5.680: 99.5380% ( 1) 00:16:00.398 5.760 - 5.787: 99.5432% ( 1) 00:16:00.398 5.920 - 5.947: 99.5484% ( 1) 00:16:00.398 6.027 - 6.053: 99.5536% ( 1) 00:16:00.398 6.160 - 6.187: 99.5588% ( 1) 00:16:00.398 6.347 - 6.373: 99.5640% ( 1) 00:16:00.398 6.987 - 7.040: 99.5691% ( 1) 00:16:00.398 7.467 - 7.520: 99.5743% ( 1) 00:16:00.398 10.827 - 10.880: 99.5795% ( 1) 00:16:00.398 13.067 - 13.120: 99.5847% ( 1) 00:16:00.398 13.280 - 13.333: 99.5899% ( 1) 00:16:00.398 44.373 - 44.587: 99.5951% ( 1) 00:16:00.398 80.640 - 81.067: 99.6003% ( 1) 00:16:00.398 3986.773 - 4014.080: 100.0000% ( 77) 00:16:00.398 00:16:00.399 12:10:13 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:00.399 12:10:13 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:00.399 12:10:13 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:00.399 12:10:13 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:00.399 12:10:13 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:00.399 [2024-06-11 12:10:13.380009] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:16:00.399 [ 00:16:00.399 { 00:16:00.399 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:16:00.399 "subtype": "Discovery", 00:16:00.399 "listen_addresses": [], 00:16:00.399 "allow_any_host": true, 00:16:00.399 "hosts": [] 00:16:00.399 }, 00:16:00.399 { 00:16:00.399 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:00.399 "subtype": "NVMe", 00:16:00.399 "listen_addresses": [ 00:16:00.399 { 00:16:00.399 "transport": "VFIOUSER", 00:16:00.399 "trtype": "VFIOUSER", 00:16:00.399 "adrfam": "IPv4", 00:16:00.399 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:00.399 "trsvcid": "0" 00:16:00.399 } 00:16:00.399 ], 00:16:00.399 "allow_any_host": true, 00:16:00.399 "hosts": [], 00:16:00.399 "serial_number": "SPDK1", 00:16:00.399 "model_number": "SPDK bdev Controller", 00:16:00.399 "max_namespaces": 32, 00:16:00.399 "min_cntlid": 1, 00:16:00.399 "max_cntlid": 65519, 00:16:00.399 "namespaces": [ 00:16:00.399 { 00:16:00.399 "nsid": 1, 00:16:00.399 "bdev_name": "Malloc1", 00:16:00.399 "name": "Malloc1", 00:16:00.399 "nguid": "2771E1AACCF84AF4B8205C51A8A00271", 00:16:00.399 "uuid": "2771e1aa-ccf8-4af4-b820-5c51a8a00271" 00:16:00.399 } 00:16:00.399 ] 00:16:00.399 }, 00:16:00.399 { 00:16:00.399 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:00.399 "subtype": "NVMe", 00:16:00.399 "listen_addresses": [ 00:16:00.399 { 00:16:00.399 "transport": "VFIOUSER", 00:16:00.399 "trtype": "VFIOUSER", 00:16:00.399 "adrfam": "IPv4", 00:16:00.399 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:00.399 "trsvcid": "0" 00:16:00.399 } 00:16:00.399 ], 00:16:00.399 "allow_any_host": true, 00:16:00.399 "hosts": [], 00:16:00.399 "serial_number": "SPDK2", 00:16:00.399 "model_number": "SPDK bdev Controller", 00:16:00.399 "max_namespaces": 32, 00:16:00.399 "min_cntlid": 1, 00:16:00.399 "max_cntlid": 65519, 00:16:00.399 "namespaces": [ 00:16:00.399 { 00:16:00.399 "nsid": 1, 00:16:00.399 "bdev_name": "Malloc2", 00:16:00.399 "name": "Malloc2", 00:16:00.399 "nguid": "995766A33C6848F09E6EF2108F387571", 00:16:00.399 "uuid": "995766a3-3c68-48f0-9e6e-f2108f387571" 00:16:00.399 } 00:16:00.399 ] 00:16:00.399 } 00:16:00.399 ] 00:16:00.399 12:10:13 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:00.399 12:10:13 -- target/nvmf_vfio_user.sh@34 -- # aerpid=1429518 00:16:00.399 12:10:13 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:00.399 12:10:13 -- common/autotest_common.sh@1244 -- # local i=0 00:16:00.399 12:10:13 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:00.399 12:10:13 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:00.399 12:10:13 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:00.399 12:10:13 -- common/autotest_common.sh@1255 -- # return 0 00:16:00.399 12:10:13 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:00.399 12:10:13 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:00.659 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.659 Malloc3 00:16:00.659 12:10:13 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:00.920 12:10:13 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:00.920 Asynchronous Event Request test 00:16:00.920 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:00.920 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:00.920 Registering asynchronous event callbacks... 00:16:00.920 Starting namespace attribute notice tests for all controllers... 00:16:00.920 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:00.920 aer_cb - Changed Namespace 00:16:00.920 Cleaning up... 00:16:00.920 [ 00:16:00.920 { 00:16:00.920 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:00.920 "subtype": "Discovery", 00:16:00.920 "listen_addresses": [], 00:16:00.920 "allow_any_host": true, 00:16:00.920 "hosts": [] 00:16:00.920 }, 00:16:00.920 { 00:16:00.920 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:00.920 "subtype": "NVMe", 00:16:00.920 "listen_addresses": [ 00:16:00.920 { 00:16:00.920 "transport": "VFIOUSER", 00:16:00.920 "trtype": "VFIOUSER", 00:16:00.920 "adrfam": "IPv4", 00:16:00.920 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:00.920 "trsvcid": "0" 00:16:00.920 } 00:16:00.920 ], 00:16:00.920 "allow_any_host": true, 00:16:00.920 "hosts": [], 00:16:00.920 "serial_number": "SPDK1", 00:16:00.920 "model_number": "SPDK bdev Controller", 00:16:00.920 "max_namespaces": 32, 00:16:00.920 "min_cntlid": 1, 00:16:00.920 "max_cntlid": 65519, 00:16:00.920 "namespaces": [ 00:16:00.920 { 00:16:00.920 "nsid": 1, 00:16:00.920 "bdev_name": "Malloc1", 00:16:00.920 "name": "Malloc1", 00:16:00.920 "nguid": "2771E1AACCF84AF4B8205C51A8A00271", 00:16:00.920 "uuid": "2771e1aa-ccf8-4af4-b820-5c51a8a00271" 00:16:00.920 }, 00:16:00.920 { 00:16:00.920 "nsid": 2, 00:16:00.920 "bdev_name": "Malloc3", 00:16:00.920 "name": "Malloc3", 00:16:00.920 "nguid": "7CF1D629AFCC43008151AF2DBCE0E7CB", 00:16:00.920 "uuid": "7cf1d629-afcc-4300-8151-af2dbce0e7cb" 00:16:00.920 } 00:16:00.920 ] 00:16:00.920 }, 00:16:00.920 { 00:16:00.920 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:00.920 "subtype": "NVMe", 00:16:00.920 "listen_addresses": [ 00:16:00.920 { 00:16:00.920 "transport": "VFIOUSER", 00:16:00.920 "trtype": "VFIOUSER", 00:16:00.920 "adrfam": "IPv4", 00:16:00.920 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:00.920 "trsvcid": "0" 00:16:00.920 } 00:16:00.920 ], 00:16:00.920 "allow_any_host": true, 00:16:00.920 "hosts": [], 00:16:00.920 "serial_number": "SPDK2", 00:16:00.920 "model_number": "SPDK bdev Controller", 00:16:00.920 "max_namespaces": 32, 00:16:00.920 "min_cntlid": 1, 00:16:00.920 "max_cntlid": 65519, 00:16:00.920 "namespaces": [ 00:16:00.920 { 00:16:00.920 "nsid": 1, 00:16:00.920 "bdev_name": "Malloc2", 00:16:00.920 "name": "Malloc2", 00:16:00.920 "nguid": "995766A33C6848F09E6EF2108F387571", 00:16:00.920 "uuid": "995766a3-3c68-48f0-9e6e-f2108f387571" 
00:16:00.920 } 00:16:00.920 ] 00:16:00.920 } 00:16:00.920 ] 00:16:00.920 12:10:13 -- target/nvmf_vfio_user.sh@44 -- # wait 1429518 00:16:00.920 12:10:13 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:00.920 12:10:13 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:00.920 12:10:13 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:00.920 12:10:13 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:00.920 [2024-06-11 12:10:13.936731] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:00.920 [2024-06-11 12:10:13.936766] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1429671 ] 00:16:00.920 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.183 [2024-06-11 12:10:13.967525] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:01.183 [2024-06-11 12:10:13.976214] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:01.183 [2024-06-11 12:10:13.976235] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5c2fcde000 00:16:01.183 [2024-06-11 12:10:13.977211] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:01.183 [2024-06-11 12:10:13.978217] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:01.183 [2024-06-11 12:10:13.979219] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:01.183 [2024-06-11 12:10:13.980224] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:01.183 [2024-06-11 12:10:13.981233] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:01.183 [2024-06-11 12:10:13.982241] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:01.183 [2024-06-11 12:10:13.983252] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:01.183 [2024-06-11 12:10:13.984257] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:01.183 [2024-06-11 12:10:13.985268] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:01.183 [2024-06-11 12:10:13.985280] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f5c2eaa4000 00:16:01.183 [2024-06-11 12:10:13.986604] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:01.183 [2024-06-11 
12:10:14.002805] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:01.183 [2024-06-11 12:10:14.002828] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:16:01.183 [2024-06-11 12:10:14.007922] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:01.183 [2024-06-11 12:10:14.007965] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:01.183 [2024-06-11 12:10:14.008045] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:16:01.183 [2024-06-11 12:10:14.008059] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:16:01.183 [2024-06-11 12:10:14.008065] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:16:01.183 [2024-06-11 12:10:14.008925] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:01.183 [2024-06-11 12:10:14.008937] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:16:01.183 [2024-06-11 12:10:14.008944] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:16:01.183 [2024-06-11 12:10:14.009934] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:01.183 [2024-06-11 12:10:14.009947] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:16:01.183 [2024-06-11 12:10:14.009954] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:16:01.183 [2024-06-11 12:10:14.010942] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:01.183 [2024-06-11 12:10:14.010952] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:01.183 [2024-06-11 12:10:14.011949] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:01.183 [2024-06-11 12:10:14.011958] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:16:01.183 [2024-06-11 12:10:14.011965] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:16:01.183 [2024-06-11 12:10:14.011972] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:01.183 [2024-06-11 12:10:14.012078] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:16:01.183 
[2024-06-11 12:10:14.012083] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:01.183 [2024-06-11 12:10:14.012089] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:01.183 [2024-06-11 12:10:14.012954] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:01.183 [2024-06-11 12:10:14.013957] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:01.183 [2024-06-11 12:10:14.014961] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:01.183 [2024-06-11 12:10:14.015978] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:01.183 [2024-06-11 12:10:14.016969] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:01.183 [2024-06-11 12:10:14.016977] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:01.183 [2024-06-11 12:10:14.016982] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:16:01.183 [2024-06-11 12:10:14.017003] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:16:01.183 [2024-06-11 12:10:14.017014] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:16:01.183 [2024-06-11 12:10:14.017029] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:01.183 [2024-06-11 12:10:14.017034] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:01.183 [2024-06-11 12:10:14.017045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:01.183 [2024-06-11 12:10:14.021025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:01.183 [2024-06-11 12:10:14.021036] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:16:01.183 [2024-06-11 12:10:14.021043] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:16:01.183 [2024-06-11 12:10:14.021047] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:16:01.183 [2024-06-11 12:10:14.021052] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:01.184 [2024-06-11 12:10:14.021056] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:16:01.184 [2024-06-11 12:10:14.021061] 
nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:16:01.184 [2024-06-11 12:10:14.021065] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:16:01.184 [2024-06-11 12:10:14.021074] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:16:01.184 [2024-06-11 12:10:14.021084] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:01.184 [2024-06-11 12:10:14.029022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:01.184 [2024-06-11 12:10:14.029033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.184 [2024-06-11 12:10:14.029042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.184 [2024-06-11 12:10:14.029050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.184 [2024-06-11 12:10:14.029061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.184 [2024-06-11 12:10:14.029066] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:16:01.184 [2024-06-11 12:10:14.029074] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:01.184 [2024-06-11 12:10:14.029083] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:01.184 [2024-06-11 12:10:14.037022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:01.184 [2024-06-11 12:10:14.037029] nvme_ctrlr.c:2877:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:16:01.184 [2024-06-11 12:10:14.037034] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:01.184 [2024-06-11 12:10:14.037041] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:16:01.184 [2024-06-11 12:10:14.037048] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:16:01.184 [2024-06-11 12:10:14.037057] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:01.184 [2024-06-11 12:10:14.045023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:01.184 [2024-06-11 12:10:14.045072] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
identify active ns (timeout 30000 ms) 00:16:01.184 [2024-06-11 12:10:14.045079] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:16:01.184 [2024-06-11 12:10:14.045087] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:01.184 [2024-06-11 12:10:14.045091] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:01.184 [2024-06-11 12:10:14.045097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:01.184 [2024-06-11 12:10:14.052026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:01.184 [2024-06-11 12:10:14.052037] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:16:01.184 [2024-06-11 12:10:14.052045] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:16:01.184 [2024-06-11 12:10:14.052053] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:16:01.184 [2024-06-11 12:10:14.052059] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:01.184 [2024-06-11 12:10:14.052064] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:01.184 [2024-06-11 12:10:14.052070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:01.184 [2024-06-11 12:10:14.061022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:01.184 [2024-06-11 12:10:14.061036] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:01.184 [2024-06-11 12:10:14.061045] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:01.184 [2024-06-11 12:10:14.061052] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:01.184 [2024-06-11 12:10:14.061056] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:01.184 [2024-06-11 12:10:14.061063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:01.184 [2024-06-11 12:10:14.069022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:01.184 [2024-06-11 12:10:14.069031] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:01.184 [2024-06-11 12:10:14.069037] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:16:01.184 [2024-06-11 12:10:14.069045] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:16:01.184 [2024-06-11 12:10:14.069051] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:01.184 [2024-06-11 12:10:14.069056] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:16:01.184 [2024-06-11 12:10:14.069061] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:16:01.184 [2024-06-11 12:10:14.069065] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:16:01.184 [2024-06-11 12:10:14.069070] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:16:01.184 [2024-06-11 12:10:14.069085] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:01.184 [2024-06-11 12:10:14.077022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:01.184 [2024-06-11 12:10:14.077035] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:01.184 [2024-06-11 12:10:14.085021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:01.184 [2024-06-11 12:10:14.085034] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:01.184 [2024-06-11 12:10:14.093022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:01.184 [2024-06-11 12:10:14.093034] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:01.184 [2024-06-11 12:10:14.101023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:01.184 [2024-06-11 12:10:14.101034] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:01.184 [2024-06-11 12:10:14.101039] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:01.184 [2024-06-11 12:10:14.101042] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:01.184 [2024-06-11 12:10:14.101046] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:01.184 [2024-06-11 12:10:14.101052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:01.184 [2024-06-11 12:10:14.101062] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:01.184 [2024-06-11 12:10:14.101066] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:01.184 [2024-06-11 12:10:14.101072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 
0x2000002fc000 PRP2 0x0 00:16:01.184 [2024-06-11 12:10:14.101079] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:01.184 [2024-06-11 12:10:14.101083] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:01.184 [2024-06-11 12:10:14.101089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:01.184 [2024-06-11 12:10:14.101096] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:01.184 [2024-06-11 12:10:14.101100] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:01.184 [2024-06-11 12:10:14.101106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:01.184 [2024-06-11 12:10:14.109022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:01.184 [2024-06-11 12:10:14.109037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:01.184 [2024-06-11 12:10:14.109046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:01.184 [2024-06-11 12:10:14.109053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:01.184 ===================================================== 00:16:01.184 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:01.184 ===================================================== 00:16:01.184 Controller Capabilities/Features 00:16:01.184 ================================ 00:16:01.184 Vendor ID: 4e58 00:16:01.185 Subsystem Vendor ID: 4e58 00:16:01.185 Serial Number: SPDK2 00:16:01.185 Model Number: SPDK bdev Controller 00:16:01.185 Firmware Version: 24.01.1 00:16:01.185 Recommended Arb Burst: 6 00:16:01.185 IEEE OUI Identifier: 8d 6b 50 00:16:01.185 Multi-path I/O 00:16:01.185 May have multiple subsystem ports: Yes 00:16:01.185 May have multiple controllers: Yes 00:16:01.185 Associated with SR-IOV VF: No 00:16:01.185 Max Data Transfer Size: 131072 00:16:01.185 Max Number of Namespaces: 32 00:16:01.185 Max Number of I/O Queues: 127 00:16:01.185 NVMe Specification Version (VS): 1.3 00:16:01.185 NVMe Specification Version (Identify): 1.3 00:16:01.185 Maximum Queue Entries: 256 00:16:01.185 Contiguous Queues Required: Yes 00:16:01.185 Arbitration Mechanisms Supported 00:16:01.185 Weighted Round Robin: Not Supported 00:16:01.185 Vendor Specific: Not Supported 00:16:01.185 Reset Timeout: 15000 ms 00:16:01.185 Doorbell Stride: 4 bytes 00:16:01.185 NVM Subsystem Reset: Not Supported 00:16:01.185 Command Sets Supported 00:16:01.185 NVM Command Set: Supported 00:16:01.185 Boot Partition: Not Supported 00:16:01.185 Memory Page Size Minimum: 4096 bytes 00:16:01.185 Memory Page Size Maximum: 4096 bytes 00:16:01.185 Persistent Memory Region: Not Supported 00:16:01.185 Optional Asynchronous Events Supported 00:16:01.185 Namespace Attribute Notices: Supported 00:16:01.185 Firmware Activation Notices: Not Supported 00:16:01.185 ANA Change Notices: Not Supported 00:16:01.185 PLE Aggregate Log Change Notices: Not Supported 00:16:01.185 LBA Status Info Alert 
Notices: Not Supported 00:16:01.185 EGE Aggregate Log Change Notices: Not Supported 00:16:01.185 Normal NVM Subsystem Shutdown event: Not Supported 00:16:01.185 Zone Descriptor Change Notices: Not Supported 00:16:01.185 Discovery Log Change Notices: Not Supported 00:16:01.185 Controller Attributes 00:16:01.185 128-bit Host Identifier: Supported 00:16:01.185 Non-Operational Permissive Mode: Not Supported 00:16:01.185 NVM Sets: Not Supported 00:16:01.185 Read Recovery Levels: Not Supported 00:16:01.185 Endurance Groups: Not Supported 00:16:01.185 Predictable Latency Mode: Not Supported 00:16:01.185 Traffic Based Keep ALive: Not Supported 00:16:01.185 Namespace Granularity: Not Supported 00:16:01.185 SQ Associations: Not Supported 00:16:01.185 UUID List: Not Supported 00:16:01.185 Multi-Domain Subsystem: Not Supported 00:16:01.185 Fixed Capacity Management: Not Supported 00:16:01.185 Variable Capacity Management: Not Supported 00:16:01.185 Delete Endurance Group: Not Supported 00:16:01.185 Delete NVM Set: Not Supported 00:16:01.185 Extended LBA Formats Supported: Not Supported 00:16:01.185 Flexible Data Placement Supported: Not Supported 00:16:01.185 00:16:01.185 Controller Memory Buffer Support 00:16:01.185 ================================ 00:16:01.185 Supported: No 00:16:01.185 00:16:01.185 Persistent Memory Region Support 00:16:01.185 ================================ 00:16:01.185 Supported: No 00:16:01.185 00:16:01.185 Admin Command Set Attributes 00:16:01.185 ============================ 00:16:01.185 Security Send/Receive: Not Supported 00:16:01.185 Format NVM: Not Supported 00:16:01.185 Firmware Activate/Download: Not Supported 00:16:01.185 Namespace Management: Not Supported 00:16:01.185 Device Self-Test: Not Supported 00:16:01.185 Directives: Not Supported 00:16:01.185 NVMe-MI: Not Supported 00:16:01.185 Virtualization Management: Not Supported 00:16:01.185 Doorbell Buffer Config: Not Supported 00:16:01.185 Get LBA Status Capability: Not Supported 00:16:01.185 Command & Feature Lockdown Capability: Not Supported 00:16:01.185 Abort Command Limit: 4 00:16:01.185 Async Event Request Limit: 4 00:16:01.185 Number of Firmware Slots: N/A 00:16:01.185 Firmware Slot 1 Read-Only: N/A 00:16:01.185 Firmware Activation Without Reset: N/A 00:16:01.185 Multiple Update Detection Support: N/A 00:16:01.185 Firmware Update Granularity: No Information Provided 00:16:01.185 Per-Namespace SMART Log: No 00:16:01.185 Asymmetric Namespace Access Log Page: Not Supported 00:16:01.185 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:01.185 Command Effects Log Page: Supported 00:16:01.185 Get Log Page Extended Data: Supported 00:16:01.185 Telemetry Log Pages: Not Supported 00:16:01.185 Persistent Event Log Pages: Not Supported 00:16:01.185 Supported Log Pages Log Page: May Support 00:16:01.185 Commands Supported & Effects Log Page: Not Supported 00:16:01.185 Feature Identifiers & Effects Log Page:May Support 00:16:01.185 NVMe-MI Commands & Effects Log Page: May Support 00:16:01.185 Data Area 4 for Telemetry Log: Not Supported 00:16:01.185 Error Log Page Entries Supported: 128 00:16:01.185 Keep Alive: Supported 00:16:01.185 Keep Alive Granularity: 10000 ms 00:16:01.185 00:16:01.185 NVM Command Set Attributes 00:16:01.185 ========================== 00:16:01.185 Submission Queue Entry Size 00:16:01.185 Max: 64 00:16:01.185 Min: 64 00:16:01.185 Completion Queue Entry Size 00:16:01.185 Max: 16 00:16:01.185 Min: 16 00:16:01.185 Number of Namespaces: 32 00:16:01.185 Compare Command: Supported 00:16:01.185 Write 
Uncorrectable Command: Not Supported 00:16:01.185 Dataset Management Command: Supported 00:16:01.185 Write Zeroes Command: Supported 00:16:01.185 Set Features Save Field: Not Supported 00:16:01.185 Reservations: Not Supported 00:16:01.185 Timestamp: Not Supported 00:16:01.185 Copy: Supported 00:16:01.185 Volatile Write Cache: Present 00:16:01.185 Atomic Write Unit (Normal): 1 00:16:01.185 Atomic Write Unit (PFail): 1 00:16:01.185 Atomic Compare & Write Unit: 1 00:16:01.185 Fused Compare & Write: Supported 00:16:01.185 Scatter-Gather List 00:16:01.185 SGL Command Set: Supported (Dword aligned) 00:16:01.185 SGL Keyed: Not Supported 00:16:01.185 SGL Bit Bucket Descriptor: Not Supported 00:16:01.185 SGL Metadata Pointer: Not Supported 00:16:01.185 Oversized SGL: Not Supported 00:16:01.185 SGL Metadata Address: Not Supported 00:16:01.185 SGL Offset: Not Supported 00:16:01.185 Transport SGL Data Block: Not Supported 00:16:01.185 Replay Protected Memory Block: Not Supported 00:16:01.185 00:16:01.185 Firmware Slot Information 00:16:01.185 ========================= 00:16:01.185 Active slot: 1 00:16:01.185 Slot 1 Firmware Revision: 24.01.1 00:16:01.185 00:16:01.185 00:16:01.185 Commands Supported and Effects 00:16:01.185 ============================== 00:16:01.185 Admin Commands 00:16:01.185 -------------- 00:16:01.185 Get Log Page (02h): Supported 00:16:01.185 Identify (06h): Supported 00:16:01.185 Abort (08h): Supported 00:16:01.185 Set Features (09h): Supported 00:16:01.185 Get Features (0Ah): Supported 00:16:01.185 Asynchronous Event Request (0Ch): Supported 00:16:01.185 Keep Alive (18h): Supported 00:16:01.185 I/O Commands 00:16:01.185 ------------ 00:16:01.185 Flush (00h): Supported LBA-Change 00:16:01.185 Write (01h): Supported LBA-Change 00:16:01.185 Read (02h): Supported 00:16:01.185 Compare (05h): Supported 00:16:01.185 Write Zeroes (08h): Supported LBA-Change 00:16:01.185 Dataset Management (09h): Supported LBA-Change 00:16:01.185 Copy (19h): Supported LBA-Change 00:16:01.185 Unknown (79h): Supported LBA-Change 00:16:01.185 Unknown (7Ah): Supported 00:16:01.185 00:16:01.185 Error Log 00:16:01.185 ========= 00:16:01.185 00:16:01.185 Arbitration 00:16:01.185 =========== 00:16:01.185 Arbitration Burst: 1 00:16:01.185 00:16:01.185 Power Management 00:16:01.185 ================ 00:16:01.185 Number of Power States: 1 00:16:01.186 Current Power State: Power State #0 00:16:01.186 Power State #0: 00:16:01.186 Max Power: 0.00 W 00:16:01.186 Non-Operational State: Operational 00:16:01.186 Entry Latency: Not Reported 00:16:01.186 Exit Latency: Not Reported 00:16:01.186 Relative Read Throughput: 0 00:16:01.186 Relative Read Latency: 0 00:16:01.186 Relative Write Throughput: 0 00:16:01.186 Relative Write Latency: 0 00:16:01.186 Idle Power: Not Reported 00:16:01.186 Active Power: Not Reported 00:16:01.186 Non-Operational Permissive Mode: Not Supported 00:16:01.186 00:16:01.186 Health Information 00:16:01.186 ================== 00:16:01.186 Critical Warnings: 00:16:01.186 Available Spare Space: OK 00:16:01.186 Temperature: OK 00:16:01.186 Device Reliability: OK 00:16:01.186 Read Only: No 00:16:01.186 Volatile Memory Backup: OK 00:16:01.186 Current Temperature: 0 Kelvin[2024-06-11 12:10:14.109152] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:01.186 [2024-06-11 12:10:14.117021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:01.186 [2024-06-11 
12:10:14.117049] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:16:01.186 [2024-06-11 12:10:14.117058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.186 [2024-06-11 12:10:14.117065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.186 [2024-06-11 12:10:14.117071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.186 [2024-06-11 12:10:14.117077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.186 [2024-06-11 12:10:14.117115] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:01.186 [2024-06-11 12:10:14.117124] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:01.186 [2024-06-11 12:10:14.118156] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:16:01.186 [2024-06-11 12:10:14.118163] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:16:01.186 [2024-06-11 12:10:14.119125] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:01.186 [2024-06-11 12:10:14.119136] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:16:01.186 [2024-06-11 12:10:14.119184] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:01.186 [2024-06-11 12:10:14.122023] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:01.186 (-273 Celsius) 00:16:01.186 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:01.186 Available Spare: 0% 00:16:01.186 Available Spare Threshold: 0% 00:16:01.186 Life Percentage Used: 0% 00:16:01.186 Data Units Read: 0 00:16:01.186 Data Units Written: 0 00:16:01.186 Host Read Commands: 0 00:16:01.186 Host Write Commands: 0 00:16:01.186 Controller Busy Time: 0 minutes 00:16:01.186 Power Cycles: 0 00:16:01.186 Power On Hours: 0 hours 00:16:01.186 Unsafe Shutdowns: 0 00:16:01.186 Unrecoverable Media Errors: 0 00:16:01.186 Lifetime Error Log Entries: 0 00:16:01.186 Warning Temperature Time: 0 minutes 00:16:01.186 Critical Temperature Time: 0 minutes 00:16:01.186 00:16:01.186 Number of Queues 00:16:01.186 ================ 00:16:01.186 Number of I/O Submission Queues: 127 00:16:01.186 Number of I/O Completion Queues: 127 00:16:01.186 00:16:01.186 Active Namespaces 00:16:01.186 ================= 00:16:01.186 Namespace ID:1 00:16:01.186 Error Recovery Timeout: Unlimited 00:16:01.186 Command Set Identifier: NVM (00h) 00:16:01.186 Deallocate: Supported 00:16:01.186 Deallocated/Unwritten Error: Not Supported 00:16:01.186 Deallocated Read Value: Unknown 00:16:01.186 Deallocate in Write Zeroes: Not Supported 00:16:01.186 Deallocated Guard Field: 0xFFFF 00:16:01.186 Flush: Supported 00:16:01.186 Reservation: Supported 00:16:01.186 Namespace Sharing 
Capabilities: Multiple Controllers 00:16:01.186 Size (in LBAs): 131072 (0GiB) 00:16:01.186 Capacity (in LBAs): 131072 (0GiB) 00:16:01.186 Utilization (in LBAs): 131072 (0GiB) 00:16:01.186 NGUID: 995766A33C6848F09E6EF2108F387571 00:16:01.186 UUID: 995766a3-3c68-48f0-9e6e-f2108f387571 00:16:01.186 Thin Provisioning: Not Supported 00:16:01.186 Per-NS Atomic Units: Yes 00:16:01.186 Atomic Boundary Size (Normal): 0 00:16:01.186 Atomic Boundary Size (PFail): 0 00:16:01.186 Atomic Boundary Offset: 0 00:16:01.186 Maximum Single Source Range Length: 65535 00:16:01.186 Maximum Copy Length: 65535 00:16:01.186 Maximum Source Range Count: 1 00:16:01.186 NGUID/EUI64 Never Reused: No 00:16:01.186 Namespace Write Protected: No 00:16:01.186 Number of LBA Formats: 1 00:16:01.186 Current LBA Format: LBA Format #00 00:16:01.186 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:01.186 00:16:01.186 12:10:14 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:01.186 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.470 Initializing NVMe Controllers 00:16:06.470 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:06.470 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:06.470 Initialization complete. Launching workers. 00:16:06.470 ======================================================== 00:16:06.470 Latency(us) 00:16:06.470 Device Information : IOPS MiB/s Average min max 00:16:06.470 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40097.80 156.63 3194.58 833.23 7806.95 00:16:06.470 ======================================================== 00:16:06.470 Total : 40097.80 156.63 3194.58 833.23 7806.95 00:16:06.470 00:16:06.470 12:10:19 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:06.470 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.754 Initializing NVMe Controllers 00:16:11.754 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:11.754 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:11.754 Initialization complete. Launching workers. 
00:16:11.754 ======================================================== 00:16:11.754 Latency(us) 00:16:11.754 Device Information : IOPS MiB/s Average min max 00:16:11.754 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 37713.17 147.32 3393.85 1082.31 7716.32 00:16:11.754 ======================================================== 00:16:11.754 Total : 37713.17 147.32 3393.85 1082.31 7716.32 00:16:11.754 00:16:11.754 12:10:24 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:11.754 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.039 Initializing NVMe Controllers 00:16:17.039 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:17.039 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:17.039 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:17.039 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:17.039 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:17.039 Initialization complete. Launching workers. 00:16:17.039 Starting thread on core 2 00:16:17.039 Starting thread on core 3 00:16:17.039 Starting thread on core 1 00:16:17.039 12:10:29 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:17.039 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.336 Initializing NVMe Controllers 00:16:20.336 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:20.336 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:20.336 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:20.336 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:20.336 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:20.336 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:20.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:20.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:20.336 Initialization complete. Launching workers. 
00:16:20.336 Starting thread on core 1 with urgent priority queue 00:16:20.336 Starting thread on core 2 with urgent priority queue 00:16:20.336 Starting thread on core 3 with urgent priority queue 00:16:20.336 Starting thread on core 0 with urgent priority queue 00:16:20.336 SPDK bdev Controller (SPDK2 ) core 0: 16845.67 IO/s 5.94 secs/100000 ios 00:16:20.336 SPDK bdev Controller (SPDK2 ) core 1: 12568.67 IO/s 7.96 secs/100000 ios 00:16:20.336 SPDK bdev Controller (SPDK2 ) core 2: 13643.67 IO/s 7.33 secs/100000 ios 00:16:20.336 SPDK bdev Controller (SPDK2 ) core 3: 8918.00 IO/s 11.21 secs/100000 ios 00:16:20.336 ======================================================== 00:16:20.336 00:16:20.336 12:10:33 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:20.336 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.596 Initializing NVMe Controllers 00:16:20.596 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:20.596 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:20.596 Namespace ID: 1 size: 0GB 00:16:20.596 Initialization complete. 00:16:20.596 INFO: using host memory buffer for IO 00:16:20.596 Hello world! 00:16:20.596 12:10:33 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:20.596 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.980 Initializing NVMe Controllers 00:16:21.980 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:21.980 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:21.980 Initialization complete. Launching workers. 
00:16:21.980 submit (in ns) avg, min, max = 8202.1, 3863.3, 3999815.0 00:16:21.980 complete (in ns) avg, min, max = 17829.0, 2370.8, 7987228.3 00:16:21.980 00:16:21.980 Submit histogram 00:16:21.980 ================ 00:16:21.980 Range in us Cumulative Count 00:16:21.980 3.840 - 3.867: 0.0156% ( 3) 00:16:21.980 3.867 - 3.893: 0.6974% ( 131) 00:16:21.980 3.893 - 3.920: 3.7471% ( 586) 00:16:21.980 3.920 - 3.947: 10.2888% ( 1257) 00:16:21.980 3.947 - 3.973: 20.2498% ( 1914) 00:16:21.980 3.973 - 4.000: 31.9126% ( 2241) 00:16:21.980 4.000 - 4.027: 45.5842% ( 2627) 00:16:21.980 4.027 - 4.053: 61.8371% ( 3123) 00:16:21.980 4.053 - 4.080: 77.6269% ( 3034) 00:16:21.980 4.080 - 4.107: 88.8629% ( 2159) 00:16:21.980 4.107 - 4.133: 94.9154% ( 1163) 00:16:21.980 4.133 - 4.160: 97.8767% ( 569) 00:16:21.980 4.160 - 4.187: 98.9748% ( 211) 00:16:21.980 4.187 - 4.213: 99.3391% ( 70) 00:16:21.980 4.213 - 4.240: 99.4379% ( 19) 00:16:21.980 4.240 - 4.267: 99.4431% ( 1) 00:16:21.980 4.267 - 4.293: 99.4588% ( 3) 00:16:21.980 4.293 - 4.320: 99.4744% ( 3) 00:16:21.980 4.507 - 4.533: 99.4796% ( 1) 00:16:21.980 5.013 - 5.040: 99.4848% ( 1) 00:16:21.980 5.040 - 5.067: 99.4900% ( 1) 00:16:21.980 5.600 - 5.627: 99.4952% ( 1) 00:16:21.980 5.787 - 5.813: 99.5004% ( 1) 00:16:21.980 5.867 - 5.893: 99.5056% ( 1) 00:16:21.980 5.920 - 5.947: 99.5108% ( 1) 00:16:21.980 6.027 - 6.053: 99.5160% ( 1) 00:16:21.980 6.053 - 6.080: 99.5212% ( 1) 00:16:21.980 6.133 - 6.160: 99.5316% ( 2) 00:16:21.980 6.160 - 6.187: 99.5420% ( 2) 00:16:21.980 6.187 - 6.213: 99.5472% ( 1) 00:16:21.980 6.213 - 6.240: 99.5524% ( 1) 00:16:21.980 6.240 - 6.267: 99.5576% ( 1) 00:16:21.980 6.293 - 6.320: 99.5628% ( 1) 00:16:21.980 6.347 - 6.373: 99.5680% ( 1) 00:16:21.980 6.400 - 6.427: 99.5733% ( 1) 00:16:21.980 6.427 - 6.453: 99.5785% ( 1) 00:16:21.980 6.453 - 6.480: 99.5837% ( 1) 00:16:21.980 6.507 - 6.533: 99.5889% ( 1) 00:16:21.980 6.560 - 6.587: 99.5941% ( 1) 00:16:21.980 6.587 - 6.613: 99.5993% ( 1) 00:16:21.980 6.613 - 6.640: 99.6045% ( 1) 00:16:21.980 6.640 - 6.667: 99.6097% ( 1) 00:16:21.980 6.667 - 6.693: 99.6149% ( 1) 00:16:21.980 6.693 - 6.720: 99.6201% ( 1) 00:16:21.980 6.720 - 6.747: 99.6253% ( 1) 00:16:21.980 6.747 - 6.773: 99.6357% ( 2) 00:16:21.980 6.800 - 6.827: 99.6617% ( 5) 00:16:21.980 6.827 - 6.880: 99.6721% ( 2) 00:16:21.980 6.880 - 6.933: 99.6982% ( 5) 00:16:21.980 6.987 - 7.040: 99.7138% ( 3) 00:16:21.980 7.093 - 7.147: 99.7242% ( 2) 00:16:21.980 7.147 - 7.200: 99.7346% ( 2) 00:16:21.980 7.200 - 7.253: 99.7450% ( 2) 00:16:21.980 7.307 - 7.360: 99.7606% ( 3) 00:16:21.980 7.360 - 7.413: 99.7658% ( 1) 00:16:21.980 7.413 - 7.467: 99.7710% ( 1) 00:16:21.980 7.467 - 7.520: 99.7918% ( 4) 00:16:21.980 7.520 - 7.573: 99.8074% ( 3) 00:16:21.980 7.627 - 7.680: 99.8231% ( 3) 00:16:21.980 7.733 - 7.787: 99.8283% ( 1) 00:16:21.980 7.787 - 7.840: 99.8335% ( 1) 00:16:21.980 7.893 - 7.947: 99.8387% ( 1) 00:16:21.980 8.000 - 8.053: 99.8439% ( 1) 00:16:21.980 8.160 - 8.213: 99.8491% ( 1) 00:16:21.980 8.213 - 8.267: 99.8543% ( 1) 00:16:21.980 8.267 - 8.320: 99.8595% ( 1) 00:16:21.980 8.533 - 8.587: 99.8647% ( 1) 00:16:21.980 9.013 - 9.067: 99.8699% ( 1) 00:16:21.980 12.373 - 12.427: 99.8751% ( 1) 00:16:21.980 12.427 - 12.480: 99.8803% ( 1) 00:16:21.980 14.933 - 15.040: 99.8855% ( 1) 00:16:21.980 15.040 - 15.147: 99.8959% ( 2) 00:16:21.980 3986.773 - 4014.080: 100.0000% ( 20) 00:16:21.980 00:16:21.980 Complete histogram 00:16:21.980 ================== 00:16:21.980 Range in us Cumulative Count 00:16:21.980 2.360 - 2.373: 0.0052% ( 1) 00:16:21.980 2.373 - 
2.387: 14.3794% ( 2762) 00:16:21.980 2.387 - 2.400: 17.5020% ( 600) 00:16:21.980 2.400 - 2.413: 19.6097% ( 405) 00:16:21.980 2.413 - 2.427: 56.7942% ( 7145) 00:16:21.980 2.427 - 2.440: 66.9945% ( 1960) 00:16:21.980 2.440 - 2.453: 74.7177% ( 1484) 00:16:21.980 2.453 - 2.467: 80.5881% ( 1128) 00:16:21.980 2.467 - 2.480: 84.1218% ( 679) 00:16:21.980 2.480 - 2.493: 86.8852% ( 531) 00:16:21.980 2.493 - 2.507: 91.9698% ( 977) 00:16:21.980 2.507 - 2.520: 95.9407% ( 763) 00:16:21.980 2.520 - 2.533: 97.6737% ( 333) 00:16:21.980 2.533 - 2.547: 98.6521% ( 188) 00:16:21.980 2.547 - 2.560: 99.1413% ( 94) 00:16:21.980 2.560 - 2.573: 99.3078% ( 32) 00:16:21.980 2.573 - 2.587: 99.3339% ( 5) 00:16:21.980 2.587 - 2.600: 99.3443% ( 2) 00:16:21.980 2.613 - 2.627: 99.3495% ( 1) 00:16:21.980 2.680 - 2.693: 99.3547% ( 1) 00:16:21.980 4.427 - 4.453: 99.3599% ( 1) 00:16:21.980 4.533 - 4.560: 99.3651% ( 1) 00:16:21.980 4.800 - 4.827: 99.3703% ( 1) 00:16:21.980 4.827 - 4.853: 99.3755% ( 1) 00:16:21.980 4.880 - 4.907: 99.3807% ( 1) 00:16:21.980 4.987 - 5.013: 99.3911% ( 2) 00:16:21.980 5.067 - 5.093: 99.3963% ( 1) 00:16:21.980 5.093 - 5.120: 99.4015% ( 1) 00:16:21.980 5.120 - 5.147: 99.4275% ( 5) 00:16:21.981 5.200 - 5.227: 99.4327% ( 1) 00:16:21.981 5.227 - 5.253: 99.4379% ( 1) 00:16:21.981 5.280 - 5.307: 99.4431% ( 1) 00:16:21.981 5.307 - 5.333: 99.4483% ( 1) 00:16:21.981 5.333 - 5.360: 99.4588% ( 2) 00:16:21.981 5.413 - 5.440: 99.4692% ( 2) 00:16:21.981 5.467 - 5.493: 99.4796% ( 2) 00:16:21.981 5.493 - 5.520: 99.4900% ( 2) 00:16:21.981 5.520 - 5.547: 99.5004% ( 2) 00:16:21.981 5.680 - 5.707: 99.5108% ( 2) 00:16:21.981 5.733 - 5.760: 99.5160% ( 1) 00:16:21.981 5.787 - 5.813: 99.5212% ( 1) 00:16:21.981 5.920 - 5.947: 99.5264% ( 1) 00:16:21.981 5.973 - 6.000: 99.5316% ( 1) 00:16:21.981 6.000 - 6.027: 99.5420% ( 2) 00:16:21.981 6.027 - 6.053: 99.5472% ( 1) 00:16:21.981 6.107 - 6.133: 99.5524% ( 1) 00:16:21.981 6.133 - 6.160: 99.5576% ( 1) 00:16:21.981 6.187 - 6.213: 99.5628% ( 1) 00:16:21.981 6.320 - 6.347: 99.5680% ( 1) 00:16:21.981 6.373 - 6.400: 99.5733% ( 1) 00:16:21.981 6.427 - 6.453: 99.5837% ( 2) 00:16:21.981 6.453 - 6.480: 99.5941% ( 2) 00:16:21.981 6.773 - 6.800: 99.5993% ( 1) 00:16:21.981 10.293 - 10.347: 99.6045% ( 1) 00:16:21.981 11.307 - 11.360: 99.6097% ( 1) 00:16:21.981 12.000 - 12.053: 99.6149% ( 1) 00:16:21.981 13.280 - 13.333: 99.6201% ( 1) 00:16:21.981 3986.773 - 4014.080: 99.9948% ( 72) 00:16:21.981 7973.547 - 8028.160: 100.0000% ( 1) 00:16:21.981 00:16:21.981 12:10:34 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:21.981 12:10:34 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:21.981 12:10:34 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:21.981 12:10:34 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:21.981 12:10:34 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:22.241 [ 00:16:22.241 { 00:16:22.241 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:22.241 "subtype": "Discovery", 00:16:22.241 "listen_addresses": [], 00:16:22.241 "allow_any_host": true, 00:16:22.241 "hosts": [] 00:16:22.241 }, 00:16:22.241 { 00:16:22.241 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:22.241 "subtype": "NVMe", 00:16:22.241 "listen_addresses": [ 00:16:22.241 { 00:16:22.241 "transport": "VFIOUSER", 00:16:22.241 "trtype": "VFIOUSER", 00:16:22.241 "adrfam": "IPv4", 
00:16:22.241 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:22.241 "trsvcid": "0" 00:16:22.241 } 00:16:22.241 ], 00:16:22.241 "allow_any_host": true, 00:16:22.241 "hosts": [], 00:16:22.241 "serial_number": "SPDK1", 00:16:22.241 "model_number": "SPDK bdev Controller", 00:16:22.241 "max_namespaces": 32, 00:16:22.241 "min_cntlid": 1, 00:16:22.241 "max_cntlid": 65519, 00:16:22.241 "namespaces": [ 00:16:22.241 { 00:16:22.241 "nsid": 1, 00:16:22.241 "bdev_name": "Malloc1", 00:16:22.241 "name": "Malloc1", 00:16:22.241 "nguid": "2771E1AACCF84AF4B8205C51A8A00271", 00:16:22.241 "uuid": "2771e1aa-ccf8-4af4-b820-5c51a8a00271" 00:16:22.241 }, 00:16:22.241 { 00:16:22.241 "nsid": 2, 00:16:22.241 "bdev_name": "Malloc3", 00:16:22.242 "name": "Malloc3", 00:16:22.242 "nguid": "7CF1D629AFCC43008151AF2DBCE0E7CB", 00:16:22.242 "uuid": "7cf1d629-afcc-4300-8151-af2dbce0e7cb" 00:16:22.242 } 00:16:22.242 ] 00:16:22.242 }, 00:16:22.242 { 00:16:22.242 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:22.242 "subtype": "NVMe", 00:16:22.242 "listen_addresses": [ 00:16:22.242 { 00:16:22.242 "transport": "VFIOUSER", 00:16:22.242 "trtype": "VFIOUSER", 00:16:22.242 "adrfam": "IPv4", 00:16:22.242 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:22.242 "trsvcid": "0" 00:16:22.242 } 00:16:22.242 ], 00:16:22.242 "allow_any_host": true, 00:16:22.242 "hosts": [], 00:16:22.242 "serial_number": "SPDK2", 00:16:22.242 "model_number": "SPDK bdev Controller", 00:16:22.242 "max_namespaces": 32, 00:16:22.242 "min_cntlid": 1, 00:16:22.242 "max_cntlid": 65519, 00:16:22.242 "namespaces": [ 00:16:22.242 { 00:16:22.242 "nsid": 1, 00:16:22.242 "bdev_name": "Malloc2", 00:16:22.242 "name": "Malloc2", 00:16:22.242 "nguid": "995766A33C6848F09E6EF2108F387571", 00:16:22.242 "uuid": "995766a3-3c68-48f0-9e6e-f2108f387571" 00:16:22.242 } 00:16:22.242 ] 00:16:22.242 } 00:16:22.242 ] 00:16:22.242 12:10:35 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:22.242 12:10:35 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:22.242 12:10:35 -- target/nvmf_vfio_user.sh@34 -- # aerpid=1433766 00:16:22.242 12:10:35 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:22.242 12:10:35 -- common/autotest_common.sh@1244 -- # local i=0 00:16:22.242 12:10:35 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:22.242 12:10:35 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:22.242 12:10:35 -- common/autotest_common.sh@1255 -- # return 0 00:16:22.242 12:10:35 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:22.242 12:10:35 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:22.242 EAL: No free 2048 kB hugepages reported on node 1 00:16:22.242 Malloc4 00:16:22.242 12:10:35 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:22.502 12:10:35 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:22.502 Asynchronous Event Request test 00:16:22.502 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:22.502 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:22.502 Registering asynchronous event callbacks... 00:16:22.502 Starting namespace attribute notice tests for all controllers... 00:16:22.502 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:22.502 aer_cb - Changed Namespace 00:16:22.502 Cleaning up... 00:16:22.763 [ 00:16:22.763 { 00:16:22.763 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:22.763 "subtype": "Discovery", 00:16:22.763 "listen_addresses": [], 00:16:22.763 "allow_any_host": true, 00:16:22.763 "hosts": [] 00:16:22.763 }, 00:16:22.763 { 00:16:22.763 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:22.763 "subtype": "NVMe", 00:16:22.763 "listen_addresses": [ 00:16:22.763 { 00:16:22.763 "transport": "VFIOUSER", 00:16:22.763 "trtype": "VFIOUSER", 00:16:22.763 "adrfam": "IPv4", 00:16:22.763 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:22.763 "trsvcid": "0" 00:16:22.763 } 00:16:22.763 ], 00:16:22.763 "allow_any_host": true, 00:16:22.763 "hosts": [], 00:16:22.763 "serial_number": "SPDK1", 00:16:22.763 "model_number": "SPDK bdev Controller", 00:16:22.763 "max_namespaces": 32, 00:16:22.763 "min_cntlid": 1, 00:16:22.763 "max_cntlid": 65519, 00:16:22.763 "namespaces": [ 00:16:22.763 { 00:16:22.763 "nsid": 1, 00:16:22.763 "bdev_name": "Malloc1", 00:16:22.763 "name": "Malloc1", 00:16:22.763 "nguid": "2771E1AACCF84AF4B8205C51A8A00271", 00:16:22.763 "uuid": "2771e1aa-ccf8-4af4-b820-5c51a8a00271" 00:16:22.763 }, 00:16:22.763 { 00:16:22.763 "nsid": 2, 00:16:22.763 "bdev_name": "Malloc3", 00:16:22.763 "name": "Malloc3", 00:16:22.763 "nguid": "7CF1D629AFCC43008151AF2DBCE0E7CB", 00:16:22.763 "uuid": "7cf1d629-afcc-4300-8151-af2dbce0e7cb" 00:16:22.763 } 00:16:22.763 ] 00:16:22.763 }, 00:16:22.763 { 00:16:22.763 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:22.763 "subtype": "NVMe", 00:16:22.763 "listen_addresses": [ 00:16:22.763 { 00:16:22.763 "transport": "VFIOUSER", 00:16:22.763 "trtype": "VFIOUSER", 00:16:22.763 "adrfam": "IPv4", 00:16:22.763 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:22.763 "trsvcid": "0" 00:16:22.763 } 00:16:22.763 ], 00:16:22.763 "allow_any_host": true, 00:16:22.763 "hosts": [], 00:16:22.763 "serial_number": "SPDK2", 00:16:22.763 "model_number": "SPDK bdev Controller", 00:16:22.763 "max_namespaces": 32, 00:16:22.763 "min_cntlid": 1, 00:16:22.763 "max_cntlid": 65519, 00:16:22.763 "namespaces": [ 00:16:22.763 { 00:16:22.763 "nsid": 1, 00:16:22.763 "bdev_name": "Malloc2", 00:16:22.763 "name": "Malloc2", 00:16:22.763 "nguid": "995766A33C6848F09E6EF2108F387571", 00:16:22.763 "uuid": "995766a3-3c68-48f0-9e6e-f2108f387571" 
00:16:22.763 }, 00:16:22.763 { 00:16:22.763 "nsid": 2, 00:16:22.763 "bdev_name": "Malloc4", 00:16:22.763 "name": "Malloc4", 00:16:22.763 "nguid": "77F9B63051F944228D2175D92F5CA946", 00:16:22.763 "uuid": "77f9b630-51f9-4422-8d21-75d92f5ca946" 00:16:22.763 } 00:16:22.763 ] 00:16:22.763 } 00:16:22.763 ] 00:16:22.763 12:10:35 -- target/nvmf_vfio_user.sh@44 -- # wait 1433766 00:16:22.763 12:10:35 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:22.763 12:10:35 -- target/nvmf_vfio_user.sh@95 -- # killprocess 1424679 00:16:22.763 12:10:35 -- common/autotest_common.sh@926 -- # '[' -z 1424679 ']' 00:16:22.763 12:10:35 -- common/autotest_common.sh@930 -- # kill -0 1424679 00:16:22.763 12:10:35 -- common/autotest_common.sh@931 -- # uname 00:16:22.763 12:10:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:22.763 12:10:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1424679 00:16:22.763 12:10:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:22.763 12:10:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:22.763 12:10:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1424679' 00:16:22.763 killing process with pid 1424679 00:16:22.763 12:10:35 -- common/autotest_common.sh@945 -- # kill 1424679 00:16:22.763 [2024-06-11 12:10:35.616491] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:16:22.763 12:10:35 -- common/autotest_common.sh@950 -- # wait 1424679 00:16:22.763 12:10:35 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:22.763 12:10:35 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:22.763 12:10:35 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:22.763 12:10:35 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:22.763 12:10:35 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:22.763 12:10:35 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1433910 00:16:22.763 12:10:35 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1433910' 00:16:22.763 Process pid: 1433910 00:16:22.763 12:10:35 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:22.763 12:10:35 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:22.763 12:10:35 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1433910 00:16:22.763 12:10:35 -- common/autotest_common.sh@819 -- # '[' -z 1433910 ']' 00:16:22.763 12:10:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.763 12:10:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:22.763 12:10:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.763 12:10:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:22.763 12:10:35 -- common/autotest_common.sh@10 -- # set +x 00:16:23.024 [2024-06-11 12:10:35.827582] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:23.024 [2024-06-11 12:10:35.828501] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:16:23.024 [2024-06-11 12:10:35.828539] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.024 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.024 [2024-06-11 12:10:35.889119] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:23.024 [2024-06-11 12:10:35.917521] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:23.024 [2024-06-11 12:10:35.917657] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.024 [2024-06-11 12:10:35.917668] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.024 [2024-06-11 12:10:35.917676] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:23.024 [2024-06-11 12:10:35.917816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.024 [2024-06-11 12:10:35.917931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.024 [2024-06-11 12:10:35.918086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.024 [2024-06-11 12:10:35.918086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:23.024 [2024-06-11 12:10:35.981333] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:16:23.024 [2024-06-11 12:10:35.981334] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:16:23.024 [2024-06-11 12:10:35.981610] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:16:23.024 [2024-06-11 12:10:35.981808] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:23.024 [2024-06-11 12:10:35.981895] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 
00:16:23.594 12:10:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:23.594 12:10:36 -- common/autotest_common.sh@852 -- # return 0 00:16:23.594 12:10:36 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:24.977 12:10:37 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:24.977 12:10:37 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:24.977 12:10:37 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:24.977 12:10:37 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:24.977 12:10:37 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:24.977 12:10:37 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:24.977 Malloc1 00:16:24.977 12:10:37 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:25.237 12:10:38 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:25.497 12:10:38 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:25.497 12:10:38 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:25.497 12:10:38 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:25.497 12:10:38 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:25.757 Malloc2 00:16:25.757 12:10:38 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:26.017 12:10:38 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:26.017 12:10:38 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:26.277 12:10:39 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:26.277 12:10:39 -- target/nvmf_vfio_user.sh@95 -- # killprocess 1433910 00:16:26.277 12:10:39 -- common/autotest_common.sh@926 -- # '[' -z 1433910 ']' 00:16:26.277 12:10:39 -- common/autotest_common.sh@930 -- # kill -0 1433910 00:16:26.277 12:10:39 -- common/autotest_common.sh@931 -- # uname 00:16:26.277 12:10:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:26.277 12:10:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1433910 00:16:26.277 12:10:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:26.277 12:10:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:26.277 12:10:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1433910' 00:16:26.277 killing process with pid 1433910 00:16:26.277 12:10:39 -- common/autotest_common.sh@945 -- # kill 1433910 00:16:26.277 12:10:39 -- common/autotest_common.sh@950 -- # wait 1433910 00:16:26.277 12:10:39 -- target/nvmf_vfio_user.sh@97 -- # rm -rf 
/var/run/vfio-user 00:16:26.277 12:10:39 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:26.277 00:16:26.277 real 0m50.229s 00:16:26.277 user 3m19.239s 00:16:26.277 sys 0m2.877s 00:16:26.277 12:10:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:26.277 12:10:39 -- common/autotest_common.sh@10 -- # set +x 00:16:26.277 ************************************ 00:16:26.277 END TEST nvmf_vfio_user 00:16:26.277 ************************************ 00:16:26.600 12:10:39 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:26.600 12:10:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:26.600 12:10:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:26.600 12:10:39 -- common/autotest_common.sh@10 -- # set +x 00:16:26.600 ************************************ 00:16:26.600 START TEST nvmf_vfio_user_nvme_compliance 00:16:26.600 ************************************ 00:16:26.600 12:10:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:26.600 * Looking for test storage... 00:16:26.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:26.600 12:10:39 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:26.600 12:10:39 -- nvmf/common.sh@7 -- # uname -s 00:16:26.600 12:10:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.600 12:10:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.600 12:10:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.600 12:10:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.600 12:10:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.600 12:10:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.600 12:10:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.600 12:10:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.600 12:10:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.600 12:10:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.600 12:10:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:26.600 12:10:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:26.600 12:10:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.600 12:10:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.600 12:10:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:26.600 12:10:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:26.600 12:10:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.600 12:10:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.600 12:10:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.600 12:10:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.600 12:10:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.600 12:10:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.600 12:10:39 -- paths/export.sh@5 -- # export PATH 00:16:26.600 12:10:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.600 12:10:39 -- nvmf/common.sh@46 -- # : 0 00:16:26.600 12:10:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:26.600 12:10:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:26.600 12:10:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:26.600 12:10:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.600 12:10:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.600 12:10:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:26.600 12:10:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:26.600 12:10:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:26.600 12:10:39 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:26.600 12:10:39 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:26.600 12:10:39 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:26.600 12:10:39 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:26.600 12:10:39 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:26.600 12:10:39 -- compliance/compliance.sh@20 -- # nvmfpid=1434781 00:16:26.600 12:10:39 -- compliance/compliance.sh@21 -- # echo 'Process pid: 1434781' 00:16:26.600 Process pid: 1434781 00:16:26.600 12:10:39 
-- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:26.600 12:10:39 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:26.600 12:10:39 -- compliance/compliance.sh@24 -- # waitforlisten 1434781 00:16:26.600 12:10:39 -- common/autotest_common.sh@819 -- # '[' -z 1434781 ']' 00:16:26.600 12:10:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.600 12:10:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:26.600 12:10:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.600 12:10:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:26.600 12:10:39 -- common/autotest_common.sh@10 -- # set +x 00:16:26.600 [2024-06-11 12:10:39.517106] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:26.600 [2024-06-11 12:10:39.517174] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.600 EAL: No free 2048 kB hugepages reported on node 1 00:16:26.600 [2024-06-11 12:10:39.573467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:26.600 [2024-06-11 12:10:39.603683] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:26.600 [2024-06-11 12:10:39.603806] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.600 [2024-06-11 12:10:39.603815] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:26.600 [2024-06-11 12:10:39.603822] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
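The compliance target above is brought up the same way as every other target in this section: nvmf_tgt is started with an explicit core mask, a trap is installed so the process is killed on any exit path, and no RPCs are issued until the application answers on /var/tmp/spdk.sock. A minimal stand-alone sketch of that pattern (the core mask 0x7 and event mask 0xFFFF are taken from the command above; the polling loop stands in for waitforlisten and is only illustrative):

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
  nvmfpid=$!
  trap 'kill -9 "$nvmfpid"; exit 1' SIGINT SIGTERM EXIT
  # hold back rpc.py calls until the target is listening on /var/tmp/spdk.sock
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done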
00:16:26.600 [2024-06-11 12:10:39.603967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.600 [2024-06-11 12:10:39.604102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.867 [2024-06-11 12:10:39.604275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.439 12:10:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:27.439 12:10:40 -- common/autotest_common.sh@852 -- # return 0 00:16:27.439 12:10:40 -- compliance/compliance.sh@26 -- # sleep 1 00:16:28.381 12:10:41 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:28.381 12:10:41 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:28.381 12:10:41 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:28.381 12:10:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:28.381 12:10:41 -- common/autotest_common.sh@10 -- # set +x 00:16:28.381 12:10:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:28.381 12:10:41 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:28.381 12:10:41 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:28.381 12:10:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:28.381 12:10:41 -- common/autotest_common.sh@10 -- # set +x 00:16:28.381 malloc0 00:16:28.381 12:10:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:28.381 12:10:41 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:28.381 12:10:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:28.381 12:10:41 -- common/autotest_common.sh@10 -- # set +x 00:16:28.381 12:10:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:28.381 12:10:41 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:28.381 12:10:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:28.381 12:10:41 -- common/autotest_common.sh@10 -- # set +x 00:16:28.381 12:10:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:28.381 12:10:41 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:28.381 12:10:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:28.381 12:10:41 -- common/autotest_common.sh@10 -- # set +x 00:16:28.381 12:10:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:28.381 12:10:41 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:28.642 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.642 00:16:28.642 00:16:28.642 CUnit - A unit testing framework for C - Version 2.1-3 00:16:28.642 http://cunit.sourceforge.net/ 00:16:28.642 00:16:28.642 00:16:28.642 Suite: nvme_compliance 00:16:28.642 Test: admin_identify_ctrlr_verify_dptr ...[2024-06-11 12:10:41.548803] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:28.642 [2024-06-11 12:10:41.548829] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:28.642 [2024-06-11 12:10:41.548834] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:28.642 passed 00:16:28.902 Test: admin_identify_ctrlr_verify_fused ...passed 00:16:28.902 Test: admin_identify_ns ...[2024-06-11 
12:10:41.808034] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:28.902 [2024-06-11 12:10:41.816030] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:28.902 passed 00:16:29.162 Test: admin_get_features_mandatory_features ...passed 00:16:29.162 Test: admin_get_features_optional_features ...passed 00:16:29.423 Test: admin_set_features_number_of_queues ...passed 00:16:29.423 Test: admin_get_log_page_mandatory_logs ...passed 00:16:29.684 Test: admin_get_log_page_with_lpo ...[2024-06-11 12:10:42.483034] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:29.684 passed 00:16:29.684 Test: fabric_property_get ...passed 00:16:29.684 Test: admin_delete_io_sq_use_admin_qid ...[2024-06-11 12:10:42.689041] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:29.945 passed 00:16:29.945 Test: admin_delete_io_sq_delete_sq_twice ...[2024-06-11 12:10:42.871035] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:29.945 [2024-06-11 12:10:42.887024] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:29.945 passed 00:16:30.206 Test: admin_delete_io_cq_use_admin_qid ...[2024-06-11 12:10:42.984025] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:30.206 passed 00:16:30.206 Test: admin_delete_io_cq_delete_cq_first ...[2024-06-11 12:10:43.155029] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:30.206 [2024-06-11 12:10:43.179027] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:30.206 passed 00:16:30.466 Test: admin_create_io_cq_verify_iv_pc ...[2024-06-11 12:10:43.276972] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:30.466 [2024-06-11 12:10:43.277000] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:30.466 passed 00:16:30.466 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-06-11 12:10:43.463027] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:30.466 [2024-06-11 12:10:43.471026] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:30.466 [2024-06-11 12:10:43.479022] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:30.466 [2024-06-11 12:10:43.487027] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:30.727 passed 00:16:30.727 Test: admin_create_io_sq_verify_pc ...[2024-06-11 12:10:43.625036] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:30.727 passed 00:16:32.110 Test: admin_create_io_qp_max_qps ...[2024-06-11 12:10:44.824029] nvme_ctrlr.c:5304:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:32.370 passed 00:16:32.630 Test: admin_create_io_sq_shared_cq ...[2024-06-11 12:10:45.429023] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:32.630 passed 00:16:32.630 00:16:32.630 Run Summary: Type Total Ran Passed Failed Inactive 00:16:32.630 suites 1 1 n/a 0 0 00:16:32.630 tests 18 18 18 0 0 00:16:32.630 asserts 360 360 360 0 n/a 00:16:32.630 00:16:32.630 Elapsed time = 1.639 seconds 00:16:32.630 
12:10:45 -- compliance/compliance.sh@42 -- # killprocess 1434781 00:16:32.630 12:10:45 -- common/autotest_common.sh@926 -- # '[' -z 1434781 ']' 00:16:32.630 12:10:45 -- common/autotest_common.sh@930 -- # kill -0 1434781 00:16:32.630 12:10:45 -- common/autotest_common.sh@931 -- # uname 00:16:32.630 12:10:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:32.630 12:10:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1434781 00:16:32.630 12:10:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:32.630 12:10:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:32.630 12:10:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1434781' 00:16:32.630 killing process with pid 1434781 00:16:32.630 12:10:45 -- common/autotest_common.sh@945 -- # kill 1434781 00:16:32.630 12:10:45 -- common/autotest_common.sh@950 -- # wait 1434781 00:16:32.891 12:10:45 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:32.891 12:10:45 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:32.891 00:16:32.891 real 0m6.360s 00:16:32.891 user 0m18.346s 00:16:32.891 sys 0m0.448s 00:16:32.891 12:10:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:32.891 12:10:45 -- common/autotest_common.sh@10 -- # set +x 00:16:32.891 ************************************ 00:16:32.891 END TEST nvmf_vfio_user_nvme_compliance 00:16:32.891 ************************************ 00:16:32.891 12:10:45 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:32.891 12:10:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:32.891 12:10:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:32.891 12:10:45 -- common/autotest_common.sh@10 -- # set +x 00:16:32.891 ************************************ 00:16:32.891 START TEST nvmf_vfio_user_fuzz 00:16:32.891 ************************************ 00:16:32.891 12:10:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:32.891 * Looking for test storage... 
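Both the compliance target just torn down and the fuzz target set up below are provisioned with the same short RPC sequence: create the VFIOUSER transport, back it with a 64 MB malloc bdev (512-byte blocks), and expose a single subsystem on a socket directory. Collected from the commands in this log (directory and NQN as they appear here; the -m 32 namespace limit is used only by the compliance run):

  mkdir -p /var/run/vfio-user
  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
  ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The compliance binary is then pointed at the same socket directory with -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'.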
00:16:32.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:32.891 12:10:45 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:32.891 12:10:45 -- nvmf/common.sh@7 -- # uname -s 00:16:32.891 12:10:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:32.891 12:10:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:32.891 12:10:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:32.891 12:10:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:32.891 12:10:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:32.891 12:10:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:32.891 12:10:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:32.891 12:10:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:32.891 12:10:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:32.891 12:10:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:32.891 12:10:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:32.891 12:10:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:32.891 12:10:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:32.891 12:10:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:32.891 12:10:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:32.891 12:10:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:32.891 12:10:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:32.891 12:10:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:32.891 12:10:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:32.891 12:10:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.891 12:10:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.891 12:10:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.891 12:10:45 -- paths/export.sh@5 -- # export PATH 00:16:32.891 12:10:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.891 12:10:45 -- nvmf/common.sh@46 -- # : 0 00:16:32.891 12:10:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:32.891 12:10:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:32.891 12:10:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:32.891 12:10:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:32.891 12:10:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:32.891 12:10:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:32.891 12:10:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:32.891 12:10:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:32.891 12:10:45 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:32.891 12:10:45 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:32.891 12:10:45 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:32.891 12:10:45 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:32.891 12:10:45 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:32.891 12:10:45 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:32.891 12:10:45 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:32.891 12:10:45 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1435967 00:16:32.891 12:10:45 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1435967' 00:16:32.891 Process pid: 1435967 00:16:32.891 12:10:45 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:32.891 12:10:45 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:32.891 12:10:45 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1435967 00:16:32.891 12:10:45 -- common/autotest_common.sh@819 -- # '[' -z 1435967 ']' 00:16:32.891 12:10:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.891 12:10:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:32.891 12:10:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
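The fuzz target launched above uses the same cleanup discipline as the other runs in this section: a trap guards against early exit, and on the normal path the target pid is killed, reaped, and the vfio-user socket directory removed. A condensed sketch (killprocess in the real scripts additionally checks with kill -0 and ps that the pid still exists and is an SPDK reactor before signalling it):

  trap 'kill -9 "$nvmfpid"; exit 1' SIGINT SIGTERM EXIT
  # ... run the test against the target ...
  kill "$nvmfpid" && wait "$nvmfpid"     # graceful stop, then reap the child
  rm -rf /var/run/vfio-user              # drop the vfio-user socket directory
  trap - SIGINT SIGTERM EXIT             # clear the error trap once cleanup is done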
00:16:32.891 12:10:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:32.891 12:10:45 -- common/autotest_common.sh@10 -- # set +x 00:16:33.831 12:10:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:33.831 12:10:46 -- common/autotest_common.sh@852 -- # return 0 00:16:33.831 12:10:46 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:34.772 12:10:47 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:34.772 12:10:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:34.772 12:10:47 -- common/autotest_common.sh@10 -- # set +x 00:16:34.772 12:10:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:34.772 12:10:47 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:34.772 12:10:47 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:34.772 12:10:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:34.772 12:10:47 -- common/autotest_common.sh@10 -- # set +x 00:16:34.772 malloc0 00:16:34.772 12:10:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:34.772 12:10:47 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:34.772 12:10:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:34.772 12:10:47 -- common/autotest_common.sh@10 -- # set +x 00:16:34.772 12:10:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:34.772 12:10:47 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:34.772 12:10:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:34.772 12:10:47 -- common/autotest_common.sh@10 -- # set +x 00:16:34.772 12:10:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:34.772 12:10:47 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:34.772 12:10:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:34.772 12:10:47 -- common/autotest_common.sh@10 -- # set +x 00:16:34.772 12:10:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:34.772 12:10:47 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:34.772 12:10:47 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:06.887 Fuzzing completed. 
Shutting down the fuzz application 00:17:06.887 00:17:06.887 Dumping successful admin opcodes: 00:17:06.887 8, 9, 10, 24, 00:17:06.887 Dumping successful io opcodes: 00:17:06.887 0, 00:17:06.887 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1247307, total successful commands: 4896, random_seed: 3554211584 00:17:06.887 NS: 0x200003a1ef00 admin qp, Total commands completed: 184828, total successful commands: 1486, random_seed: 1343393600 00:17:06.887 12:11:18 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:06.887 12:11:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:06.887 12:11:18 -- common/autotest_common.sh@10 -- # set +x 00:17:06.887 12:11:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:06.887 12:11:18 -- target/vfio_user_fuzz.sh@46 -- # killprocess 1435967 00:17:06.887 12:11:18 -- common/autotest_common.sh@926 -- # '[' -z 1435967 ']' 00:17:06.887 12:11:18 -- common/autotest_common.sh@930 -- # kill -0 1435967 00:17:06.887 12:11:18 -- common/autotest_common.sh@931 -- # uname 00:17:06.887 12:11:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:06.887 12:11:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1435967 00:17:06.887 12:11:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:06.887 12:11:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:06.887 12:11:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1435967' 00:17:06.887 killing process with pid 1435967 00:17:06.887 12:11:18 -- common/autotest_common.sh@945 -- # kill 1435967 00:17:06.887 12:11:18 -- common/autotest_common.sh@950 -- # wait 1435967 00:17:06.887 12:11:18 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:06.887 12:11:18 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:06.887 00:17:06.887 real 0m32.624s 00:17:06.887 user 0m35.574s 00:17:06.887 sys 0m25.458s 00:17:06.887 12:11:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:06.887 12:11:18 -- common/autotest_common.sh@10 -- # set +x 00:17:06.888 ************************************ 00:17:06.888 END TEST nvmf_vfio_user_fuzz 00:17:06.888 ************************************ 00:17:06.888 12:11:18 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:06.888 12:11:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:06.888 12:11:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:06.888 12:11:18 -- common/autotest_common.sh@10 -- # set +x 00:17:06.888 ************************************ 00:17:06.888 START TEST nvmf_host_management 00:17:06.888 ************************************ 00:17:06.888 12:11:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:06.888 * Looking for test storage... 
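The opcode and command-count dump above comes from a single 30-second nvme_fuzz run against that vfio-user subsystem, seeded so the generated command stream is reproducible. The invocation, shortened from the full workspace path in the log (run from the SPDK checkout, with the target already listening on /var/run/vfio-user):

  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a

-t 30 is the run time in seconds and -S 123456 the random seed; the remaining flags are copied verbatim from the command above.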
00:17:06.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:06.888 12:11:18 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:06.888 12:11:18 -- nvmf/common.sh@7 -- # uname -s 00:17:06.888 12:11:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.888 12:11:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.888 12:11:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.888 12:11:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.888 12:11:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.888 12:11:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.888 12:11:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.888 12:11:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.888 12:11:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.888 12:11:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.888 12:11:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:06.888 12:11:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:06.888 12:11:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.888 12:11:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.888 12:11:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:06.888 12:11:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:06.888 12:11:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.888 12:11:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.888 12:11:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.888 12:11:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.888 12:11:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.888 12:11:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.888 12:11:18 -- paths/export.sh@5 -- # export PATH 00:17:06.888 12:11:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.888 12:11:18 -- nvmf/common.sh@46 -- # : 0 00:17:06.888 12:11:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:06.888 12:11:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:06.888 12:11:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:06.888 12:11:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.888 12:11:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.888 12:11:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:06.888 12:11:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:06.888 12:11:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:06.888 12:11:18 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:06.888 12:11:18 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:06.888 12:11:18 -- target/host_management.sh@104 -- # nvmftestinit 00:17:06.888 12:11:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:06.888 12:11:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.888 12:11:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:06.888 12:11:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:06.888 12:11:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:06.888 12:11:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.888 12:11:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.888 12:11:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.888 12:11:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:06.888 12:11:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:06.888 12:11:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:06.888 12:11:18 -- common/autotest_common.sh@10 -- # set +x 00:17:13.478 12:11:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:13.478 12:11:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:13.478 12:11:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:13.478 12:11:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:13.478 12:11:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:13.478 12:11:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:13.478 12:11:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:13.478 12:11:25 -- nvmf/common.sh@294 -- # net_devs=() 00:17:13.478 12:11:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:13.478 
12:11:25 -- nvmf/common.sh@295 -- # e810=() 00:17:13.478 12:11:25 -- nvmf/common.sh@295 -- # local -ga e810 00:17:13.478 12:11:25 -- nvmf/common.sh@296 -- # x722=() 00:17:13.478 12:11:25 -- nvmf/common.sh@296 -- # local -ga x722 00:17:13.478 12:11:25 -- nvmf/common.sh@297 -- # mlx=() 00:17:13.478 12:11:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:13.478 12:11:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:13.478 12:11:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:13.478 12:11:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:13.478 12:11:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:13.478 12:11:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:13.478 12:11:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:13.478 12:11:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:13.478 12:11:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:13.479 12:11:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:13.479 12:11:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:13.479 12:11:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:13.479 12:11:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:13.479 12:11:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:13.479 12:11:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:13.479 12:11:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:13.479 12:11:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:13.479 12:11:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:13.479 12:11:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:13.479 12:11:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:13.479 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:13.479 12:11:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:13.479 12:11:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:13.479 12:11:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:13.479 12:11:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:13.479 12:11:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:13.479 12:11:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:13.479 12:11:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:13.479 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:13.479 12:11:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:13.479 12:11:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:13.479 12:11:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:13.479 12:11:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:13.479 12:11:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:13.479 12:11:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:13.479 12:11:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:13.479 12:11:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:13.479 12:11:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:13.479 12:11:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.479 12:11:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:13.479 12:11:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.479 12:11:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:31:00.0: cvl_0_0' 00:17:13.479 Found net devices under 0000:31:00.0: cvl_0_0 00:17:13.479 12:11:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.479 12:11:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:13.479 12:11:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.479 12:11:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:13.479 12:11:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.479 12:11:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:13.479 Found net devices under 0000:31:00.1: cvl_0_1 00:17:13.479 12:11:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.479 12:11:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:13.479 12:11:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:13.479 12:11:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:13.479 12:11:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:13.479 12:11:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:13.479 12:11:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:13.479 12:11:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:13.479 12:11:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:13.479 12:11:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:13.479 12:11:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:13.479 12:11:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:13.479 12:11:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:13.479 12:11:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:13.479 12:11:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:13.479 12:11:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:13.479 12:11:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:13.479 12:11:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:13.479 12:11:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:13.479 12:11:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:13.479 12:11:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:13.479 12:11:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:13.479 12:11:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:13.479 12:11:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:13.479 12:11:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:13.479 12:11:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:13.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:13.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:17:13.479 00:17:13.479 --- 10.0.0.2 ping statistics --- 00:17:13.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.479 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:17:13.479 12:11:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:13.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:13.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:17:13.479 00:17:13.479 --- 10.0.0.1 ping statistics --- 00:17:13.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.479 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:17:13.479 12:11:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:13.479 12:11:25 -- nvmf/common.sh@410 -- # return 0 00:17:13.479 12:11:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:13.479 12:11:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:13.479 12:11:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:13.479 12:11:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:13.479 12:11:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:13.479 12:11:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:13.479 12:11:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:13.479 12:11:25 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:17:13.479 12:11:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:13.479 12:11:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:13.479 12:11:25 -- common/autotest_common.sh@10 -- # set +x 00:17:13.479 ************************************ 00:17:13.479 START TEST nvmf_host_management 00:17:13.479 ************************************ 00:17:13.479 12:11:25 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:17:13.479 12:11:25 -- target/host_management.sh@69 -- # starttarget 00:17:13.479 12:11:25 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:13.479 12:11:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:13.479 12:11:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:13.479 12:11:25 -- common/autotest_common.sh@10 -- # set +x 00:17:13.479 12:11:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:13.479 12:11:25 -- nvmf/common.sh@469 -- # nvmfpid=1446174 00:17:13.479 12:11:25 -- nvmf/common.sh@470 -- # waitforlisten 1446174 00:17:13.479 12:11:25 -- common/autotest_common.sh@819 -- # '[' -z 1446174 ']' 00:17:13.479 12:11:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.479 12:11:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:13.479 12:11:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.479 12:11:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:13.479 12:11:25 -- common/autotest_common.sh@10 -- # set +x 00:17:13.479 [2024-06-11 12:11:25.852840] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
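The nvmftestinit block above (before the target application was launched) builds the NVMe/TCP test network out of the two ports of one physical NIC: one port is moved into a private network namespace and addressed as the target, the other stays in the root namespace as the initiator, and an iptables rule opens the NVMe/TCP port between them. Condensed from the commands in the log (the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are specific to this machine):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                            # initiator -> target sanity check

The target itself then runs inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), which is why its listener at 10.0.0.2:4420 is reachable from the initiator port.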
00:17:13.479 [2024-06-11 12:11:25.852889] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:13.479 EAL: No free 2048 kB hugepages reported on node 1 00:17:13.479 [2024-06-11 12:11:25.936685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:13.479 [2024-06-11 12:11:25.976617] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:13.479 [2024-06-11 12:11:25.976772] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:13.479 [2024-06-11 12:11:25.976785] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:13.479 [2024-06-11 12:11:25.976795] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:13.479 [2024-06-11 12:11:25.976913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:13.479 [2024-06-11 12:11:25.977074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:13.479 [2024-06-11 12:11:25.977209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.479 [2024-06-11 12:11:25.977210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:13.740 12:11:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:13.740 12:11:26 -- common/autotest_common.sh@852 -- # return 0 00:17:13.740 12:11:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:13.740 12:11:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:13.740 12:11:26 -- common/autotest_common.sh@10 -- # set +x 00:17:13.740 12:11:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.740 12:11:26 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:13.740 12:11:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:13.740 12:11:26 -- common/autotest_common.sh@10 -- # set +x 00:17:13.740 [2024-06-11 12:11:26.671239] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.740 12:11:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:13.740 12:11:26 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:13.740 12:11:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:13.740 12:11:26 -- common/autotest_common.sh@10 -- # set +x 00:17:13.740 12:11:26 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:13.740 12:11:26 -- target/host_management.sh@23 -- # cat 00:17:13.740 12:11:26 -- target/host_management.sh@30 -- # rpc_cmd 00:17:13.740 12:11:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:13.740 12:11:26 -- common/autotest_common.sh@10 -- # set +x 00:17:13.740 Malloc0 00:17:13.740 [2024-06-11 12:11:26.730518] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.740 12:11:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:13.740 12:11:26 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:13.740 12:11:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:13.740 12:11:26 -- common/autotest_common.sh@10 -- # set +x 00:17:14.002 12:11:26 -- target/host_management.sh@73 -- # perfpid=1446447 00:17:14.002 12:11:26 -- target/host_management.sh@74 -- # 
waitforlisten 1446447 /var/tmp/bdevperf.sock 00:17:14.002 12:11:26 -- common/autotest_common.sh@819 -- # '[' -z 1446447 ']' 00:17:14.002 12:11:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:14.002 12:11:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:14.002 12:11:26 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:14.002 12:11:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:14.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:14.002 12:11:26 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:14.002 12:11:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:14.002 12:11:26 -- common/autotest_common.sh@10 -- # set +x 00:17:14.002 12:11:26 -- nvmf/common.sh@520 -- # config=() 00:17:14.002 12:11:26 -- nvmf/common.sh@520 -- # local subsystem config 00:17:14.002 12:11:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:14.002 12:11:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:14.002 { 00:17:14.002 "params": { 00:17:14.002 "name": "Nvme$subsystem", 00:17:14.002 "trtype": "$TEST_TRANSPORT", 00:17:14.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:14.002 "adrfam": "ipv4", 00:17:14.002 "trsvcid": "$NVMF_PORT", 00:17:14.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:14.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:14.002 "hdgst": ${hdgst:-false}, 00:17:14.002 "ddgst": ${ddgst:-false} 00:17:14.002 }, 00:17:14.002 "method": "bdev_nvme_attach_controller" 00:17:14.002 } 00:17:14.002 EOF 00:17:14.002 )") 00:17:14.002 12:11:26 -- nvmf/common.sh@542 -- # cat 00:17:14.002 12:11:26 -- nvmf/common.sh@544 -- # jq . 00:17:14.002 12:11:26 -- nvmf/common.sh@545 -- # IFS=, 00:17:14.002 12:11:26 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:14.002 "params": { 00:17:14.002 "name": "Nvme0", 00:17:14.002 "trtype": "tcp", 00:17:14.002 "traddr": "10.0.0.2", 00:17:14.002 "adrfam": "ipv4", 00:17:14.002 "trsvcid": "4420", 00:17:14.002 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:14.002 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:14.002 "hdgst": false, 00:17:14.002 "ddgst": false 00:17:14.002 }, 00:17:14.002 "method": "bdev_nvme_attach_controller" 00:17:14.002 }' 00:17:14.002 [2024-06-11 12:11:26.826548] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:14.002 [2024-06-11 12:11:26.826597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1446447 ] 00:17:14.002 EAL: No free 2048 kB hugepages reported on node 1 00:17:14.002 [2024-06-11 12:11:26.885984] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.002 [2024-06-11 12:11:26.914904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.263 Running I/O for 10 seconds... 
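The host side of the run above is bdevperf driven by a JSON config handed over on /dev/fd/63, i.e. through shell process substitution: the template printed by gen_nvmf_target_json expands to one bdev_nvme_attach_controller call that connects Nvme0 to the target at 10.0.0.2:4420, and bdevperf runs a 10-second verify workload at queue depth 64 with 64 KiB I/O against the resulting bdev. A sketch of an equivalent stand-alone invocation; the temporary file path is illustrative, and the outer "subsystems"/"bdev" wrapper is the usual shape of such a config and is assumed here, only the attach-controller entry appears verbatim in the log:

  cat > /tmp/bdevperf_nvme.json <<'EOF'
  {
    "subsystems": [ {
      "subsystem": "bdev",
      "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": {
          "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
          "adrfam": "ipv4", "trsvcid": "4420",
          "subnqn": "nqn.2016-06.io.spdk:cnode0",
          "hostnqn": "nqn.2016-06.io.spdk:host0",
          "hdgst": false, "ddgst": false
        }
      } ]
    } ]
  }
  EOF
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 \
      --json /tmp/bdevperf_nvme.json

bdevperf exposes its own RPC socket at /var/tmp/bdevperf.sock, which is how the test below polls bdev_get_iostat for num_read_ops while the workload runs.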
00:17:14.836 12:11:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:14.836 12:11:27 -- common/autotest_common.sh@852 -- # return 0 00:17:14.836 12:11:27 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:14.836 12:11:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:14.836 12:11:27 -- common/autotest_common.sh@10 -- # set +x 00:17:14.836 12:11:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:14.836 12:11:27 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:14.836 12:11:27 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:14.836 12:11:27 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:14.836 12:11:27 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:14.836 12:11:27 -- target/host_management.sh@52 -- # local ret=1 00:17:14.836 12:11:27 -- target/host_management.sh@53 -- # local i 00:17:14.836 12:11:27 -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:14.836 12:11:27 -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:14.836 12:11:27 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:14.836 12:11:27 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:14.836 12:11:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:14.836 12:11:27 -- common/autotest_common.sh@10 -- # set +x 00:17:14.836 12:11:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:14.836 12:11:27 -- target/host_management.sh@55 -- # read_io_count=2196 00:17:14.836 12:11:27 -- target/host_management.sh@58 -- # '[' 2196 -ge 100 ']' 00:17:14.836 12:11:27 -- target/host_management.sh@59 -- # ret=0 00:17:14.836 12:11:27 -- target/host_management.sh@60 -- # break 00:17:14.836 12:11:27 -- target/host_management.sh@64 -- # return 0 00:17:14.836 12:11:27 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:14.836 12:11:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:14.837 12:11:27 -- common/autotest_common.sh@10 -- # set +x 00:17:14.837 [2024-06-11 12:11:27.661534] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661578] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661586] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661594] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661600] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661616] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661622] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661630] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the 
state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661636] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661642] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661649] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661656] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661662] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661669] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661677] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661683] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661689] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661696] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661702] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661710] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661717] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661723] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661729] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661735] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661742] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661752] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661759] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-06-11 12:11:27.661765] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with id:0 cdw10:00000000 cdw11:00000000 00:17:14.837 the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661778] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661785] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-06-11 12:11:27.661791] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.837 the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661801] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.837 [2024-06-11 12:11:27.661808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.837 [2024-06-11 12:11:27.661815] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 ns[2024-06-11 12:11:27.661822] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with id:0 cdw10:00000000 cdw11:00000000 00:17:14.837 the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-06-11 12:11:27.661830] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.837 the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661842] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with [2024-06-11 12:11:27.661843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsthe state(5) to be set 00:17:14.837 id:0 cdw10:00000000 cdw11:00000000 00:17:14.837 [2024-06-11 12:11:27.661851] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.837 [2024-06-11 12:11:27.661859] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661861] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a434a0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661868] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661876] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661884] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661891] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661898] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661905] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661912] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.661919] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ebe0 is same with the state(5) to be set 00:17:14.837 [2024-06-11 12:11:27.662233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.837 [2024-06-11 12:11:27.662255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.837 [2024-06-11 12:11:27.662270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.837 [2024-06-11 12:11:27.662281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.837 [2024-06-11 12:11:27.662291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:46080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.837 [2024-06-11 12:11:27.662299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.837 [2024-06-11 12:11:27.662308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.837 [2024-06-11 12:11:27.662316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.837 [2024-06-11 12:11:27.662325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.837 [2024-06-11 12:11:27.662333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.837 [2024-06-11 12:11:27.662343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.837 [2024-06-11 12:11:27.662350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.837 [2024-06-11 12:11:27.662360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:46464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.837 [2024-06-11 12:11:27.662367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:14.838 [2024-06-11 12:11:27.662385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:46848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:46976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:47104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:47232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:47488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:47616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 
[2024-06-11 12:11:27.662562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:48000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 
12:11:27.662738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:48384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:48512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:48640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:48896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662918] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:49024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.662982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.838 [2024-06-11 12:11:27.662990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.838 [2024-06-11 12:11:27.663001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.839 [2024-06-11 12:11:27.663009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.839 [2024-06-11 12:11:27.663025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.839 [2024-06-11 12:11:27.663034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.839 [2024-06-11 12:11:27.663043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.839 [2024-06-11 12:11:27.663052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.839 [2024-06-11 12:11:27.663062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.839 [2024-06-11 12:11:27.663070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.839 [2024-06-11 12:11:27.663079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.839 [2024-06-11 12:11:27.663087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.839 [2024-06-11 12:11:27.663098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.839 [2024-06-11 12:11:27.663106] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.839 [2024-06-11 12:11:27.663116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.839 [2024-06-11 12:11:27.663124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.839 [2024-06-11 12:11:27.663134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.839 [2024-06-11 12:11:27.663142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.839 [2024-06-11 12:11:27.663152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.839 [2024-06-11 12:11:27.663160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.839 [2024-06-11 12:11:27.663170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.839 [2024-06-11 12:11:27.663179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.839 [2024-06-11 12:11:27.663190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.839 [2024-06-11 12:11:27.663198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.839 [2024-06-11 12:11:27.663208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.839 [2024-06-11 12:11:27.663216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.839 [2024-06-11 12:11:27.663226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.839 [2024-06-11 12:11:27.663234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.839 [2024-06-11 12:11:27.663244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.839 [2024-06-11 12:11:27.663252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.839 [2024-06-11 12:11:27.663262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.839 [2024-06-11 12:11:27.663270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.839 [2024-06-11 12:11:27.663281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.839 [2024-06-11 12:11:27.663289] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.839 [2024-06-11 12:11:27.663299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.839 [2024-06-11 12:11:27.663307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.839 [2024-06-11 12:11:27.663317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.839 [2024-06-11 12:11:27.663325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.839 [2024-06-11 12:11:27.663335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.839 [2024-06-11 12:11:27.663343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.839 [2024-06-11 12:11:27.663353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.839 [2024-06-11 12:11:27.663361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.839 [2024-06-11 12:11:27.663370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.839 [2024-06-11 12:11:27.663379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.839 [2024-06-11 12:11:27.663389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.839 [2024-06-11 12:11:27.663397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.839 [2024-06-11 12:11:27.663450] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a40cd0 was disconnected and freed. reset controller. 
00:17:14.839 [2024-06-11 12:11:27.664644] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:14.839 task offset: 45824 on job bdev=Nvme0n1 fails 00:17:14.839 00:17:14.839 Latency(us) 00:17:14.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.839 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:14.839 Job: Nvme0n1 ended in about 0.62 seconds with error 00:17:14.839 Verification LBA range: start 0x0 length 0x400 00:17:14.839 Nvme0n1 : 0.62 3853.84 240.87 103.59 0.00 15903.78 1549.65 21299.20 00:17:14.839 =================================================================================================================== 00:17:14.839 Total : 3853.84 240.87 103.59 0.00 15903.78 1549.65 21299.20 00:17:14.839 12:11:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:14.839 [2024-06-11 12:11:27.666599] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:14.839 [2024-06-11 12:11:27.666622] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a434a0 (9): Bad file descriptor 00:17:14.839 12:11:27 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:14.839 12:11:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:14.839 12:11:27 -- common/autotest_common.sh@10 -- # set +x 00:17:14.839 [2024-06-11 12:11:27.673101] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:17:14.839 [2024-06-11 12:11:27.673191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:14.839 [2024-06-11 12:11:27.673212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.839 [2024-06-11 12:11:27.673225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:17:14.839 [2024-06-11 12:11:27.673232] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:17:14.839 [2024-06-11 12:11:27.673239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:17:14.839 [2024-06-11 12:11:27.673246] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a434a0 00:17:14.839 [2024-06-11 12:11:27.673264] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a434a0 (9): Bad file descriptor 00:17:14.839 [2024-06-11 12:11:27.673277] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:14.839 [2024-06-11 12:11:27.673284] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:14.839 [2024-06-11 12:11:27.673292] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:14.839 [2024-06-11 12:11:27.673304] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
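In plain rpc.py terms, the host-management step exercised above is the following pair of calls (a minimal sketch using the NQNs from this run; the script drives the same calls through its rpc_cmd wrapper while the verify workload is still running):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Revoke host access while I/O is in flight: the target tears down the queue pair, the
  # outstanding commands complete as "ABORTED - SQ DELETION", and the host's reconnect attempt
  # is rejected with "Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host ..." as logged above.
  $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # Re-grant access so the follow-up short bdevperf run below can attach again.
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0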
00:17:14.839 12:11:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:14.839 12:11:27 -- target/host_management.sh@87 -- # sleep 1 00:17:15.781 12:11:28 -- target/host_management.sh@91 -- # kill -9 1446447 00:17:15.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1446447) - No such process 00:17:15.781 12:11:28 -- target/host_management.sh@91 -- # true 00:17:15.781 12:11:28 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:15.781 12:11:28 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:15.781 12:11:28 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:15.781 12:11:28 -- nvmf/common.sh@520 -- # config=() 00:17:15.781 12:11:28 -- nvmf/common.sh@520 -- # local subsystem config 00:17:15.781 12:11:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:15.781 12:11:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:15.781 { 00:17:15.781 "params": { 00:17:15.781 "name": "Nvme$subsystem", 00:17:15.781 "trtype": "$TEST_TRANSPORT", 00:17:15.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:15.781 "adrfam": "ipv4", 00:17:15.781 "trsvcid": "$NVMF_PORT", 00:17:15.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:15.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:15.781 "hdgst": ${hdgst:-false}, 00:17:15.781 "ddgst": ${ddgst:-false} 00:17:15.781 }, 00:17:15.781 "method": "bdev_nvme_attach_controller" 00:17:15.781 } 00:17:15.781 EOF 00:17:15.781 )") 00:17:15.781 12:11:28 -- nvmf/common.sh@542 -- # cat 00:17:15.781 12:11:28 -- nvmf/common.sh@544 -- # jq . 00:17:15.781 12:11:28 -- nvmf/common.sh@545 -- # IFS=, 00:17:15.781 12:11:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:15.781 "params": { 00:17:15.781 "name": "Nvme0", 00:17:15.781 "trtype": "tcp", 00:17:15.781 "traddr": "10.0.0.2", 00:17:15.781 "adrfam": "ipv4", 00:17:15.781 "trsvcid": "4420", 00:17:15.781 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:15.781 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:15.781 "hdgst": false, 00:17:15.781 "ddgst": false 00:17:15.781 }, 00:17:15.781 "method": "bdev_nvme_attach_controller" 00:17:15.781 }' 00:17:15.781 [2024-06-11 12:11:28.733180] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:15.781 [2024-06-11 12:11:28.733233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1446803 ] 00:17:15.781 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.781 [2024-06-11 12:11:28.792961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.043 [2024-06-11 12:11:28.820566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.304 Running I/O for 1 seconds... 
00:17:17.246 00:17:17.246 Latency(us) 00:17:17.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.246 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:17.246 Verification LBA range: start 0x0 length 0x400 00:17:17.246 Nvme0n1 : 1.01 3441.35 215.08 0.00 0.00 18315.84 2621.44 23374.51 00:17:17.246 =================================================================================================================== 00:17:17.246 Total : 3441.35 215.08 0.00 0.00 18315.84 2621.44 23374.51 00:17:17.246 12:11:30 -- target/host_management.sh@101 -- # stoptarget 00:17:17.246 12:11:30 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:17.246 12:11:30 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:17.246 12:11:30 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:17.246 12:11:30 -- target/host_management.sh@40 -- # nvmftestfini 00:17:17.246 12:11:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:17.246 12:11:30 -- nvmf/common.sh@116 -- # sync 00:17:17.246 12:11:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:17.246 12:11:30 -- nvmf/common.sh@119 -- # set +e 00:17:17.246 12:11:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:17.246 12:11:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:17.246 rmmod nvme_tcp 00:17:17.508 rmmod nvme_fabrics 00:17:17.508 rmmod nvme_keyring 00:17:17.508 12:11:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:17.508 12:11:30 -- nvmf/common.sh@123 -- # set -e 00:17:17.508 12:11:30 -- nvmf/common.sh@124 -- # return 0 00:17:17.508 12:11:30 -- nvmf/common.sh@477 -- # '[' -n 1446174 ']' 00:17:17.508 12:11:30 -- nvmf/common.sh@478 -- # killprocess 1446174 00:17:17.508 12:11:30 -- common/autotest_common.sh@926 -- # '[' -z 1446174 ']' 00:17:17.508 12:11:30 -- common/autotest_common.sh@930 -- # kill -0 1446174 00:17:17.508 12:11:30 -- common/autotest_common.sh@931 -- # uname 00:17:17.508 12:11:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:17.508 12:11:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1446174 00:17:17.508 12:11:30 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:17.508 12:11:30 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:17.508 12:11:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1446174' 00:17:17.508 killing process with pid 1446174 00:17:17.508 12:11:30 -- common/autotest_common.sh@945 -- # kill 1446174 00:17:17.508 12:11:30 -- common/autotest_common.sh@950 -- # wait 1446174 00:17:17.508 [2024-06-11 12:11:30.495124] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:17.508 12:11:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:17.508 12:11:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:17.508 12:11:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:17.508 12:11:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:17.508 12:11:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:17.508 12:11:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.508 12:11:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.508 12:11:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.054 12:11:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 
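Condensed, the teardown traced above comes down to these steps (a sketch using the module, PID, and interface names from this run; the real nvmftestfini/killprocess helpers add retries and error checks around each one):

  sync
  modprobe -v -r nvme-tcp        # unloads nvme_tcp, nvme_fabrics and nvme_keyring, as logged above
  modprobe -v -r nvme-fabrics
  kill 1446174 && wait 1446174   # killprocess: stop the nvmf_tgt app started for this test
  ip -4 addr flush cvl_0_1       # nvmf_tcp_fini: drop the initiator-side test address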
00:17:20.054 00:17:20.054 real 0m6.779s 00:17:20.054 user 0m20.486s 00:17:20.054 sys 0m1.105s 00:17:20.054 12:11:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:20.054 12:11:32 -- common/autotest_common.sh@10 -- # set +x 00:17:20.054 ************************************ 00:17:20.054 END TEST nvmf_host_management 00:17:20.054 ************************************ 00:17:20.054 12:11:32 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:20.054 00:17:20.054 real 0m14.196s 00:17:20.054 user 0m22.520s 00:17:20.054 sys 0m6.431s 00:17:20.055 12:11:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:20.055 12:11:32 -- common/autotest_common.sh@10 -- # set +x 00:17:20.055 ************************************ 00:17:20.055 END TEST nvmf_host_management 00:17:20.055 ************************************ 00:17:20.055 12:11:32 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:20.055 12:11:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:20.055 12:11:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:20.055 12:11:32 -- common/autotest_common.sh@10 -- # set +x 00:17:20.055 ************************************ 00:17:20.055 START TEST nvmf_lvol 00:17:20.055 ************************************ 00:17:20.055 12:11:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:20.055 * Looking for test storage... 00:17:20.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:20.055 12:11:32 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:20.055 12:11:32 -- nvmf/common.sh@7 -- # uname -s 00:17:20.055 12:11:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:20.055 12:11:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.055 12:11:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:20.055 12:11:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.055 12:11:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:20.055 12:11:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:20.055 12:11:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.055 12:11:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:20.055 12:11:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.055 12:11:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:20.055 12:11:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:20.055 12:11:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:20.055 12:11:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.055 12:11:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:20.055 12:11:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:20.055 12:11:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:20.055 12:11:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.055 12:11:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.055 12:11:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.055 12:11:32 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.055 12:11:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.055 12:11:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.055 12:11:32 -- paths/export.sh@5 -- # export PATH 00:17:20.055 12:11:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.055 12:11:32 -- nvmf/common.sh@46 -- # : 0 00:17:20.055 12:11:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:20.055 12:11:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:20.055 12:11:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:20.055 12:11:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.055 12:11:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:20.055 12:11:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:20.055 12:11:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:20.055 12:11:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:20.055 12:11:32 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:20.055 12:11:32 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:20.055 12:11:32 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:20.055 12:11:32 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:20.055 12:11:32 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:20.055 12:11:32 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:20.055 12:11:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:20.055 12:11:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:17:20.055 12:11:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:20.055 12:11:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:20.055 12:11:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:20.055 12:11:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.055 12:11:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.055 12:11:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.055 12:11:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:20.055 12:11:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:20.055 12:11:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:20.055 12:11:32 -- common/autotest_common.sh@10 -- # set +x 00:17:26.644 12:11:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:26.644 12:11:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:26.644 12:11:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:26.644 12:11:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:26.644 12:11:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:26.644 12:11:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:26.644 12:11:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:26.644 12:11:39 -- nvmf/common.sh@294 -- # net_devs=() 00:17:26.644 12:11:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:26.644 12:11:39 -- nvmf/common.sh@295 -- # e810=() 00:17:26.644 12:11:39 -- nvmf/common.sh@295 -- # local -ga e810 00:17:26.644 12:11:39 -- nvmf/common.sh@296 -- # x722=() 00:17:26.644 12:11:39 -- nvmf/common.sh@296 -- # local -ga x722 00:17:26.644 12:11:39 -- nvmf/common.sh@297 -- # mlx=() 00:17:26.644 12:11:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:26.644 12:11:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:26.644 12:11:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:26.644 12:11:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:26.644 12:11:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:26.644 12:11:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:26.644 12:11:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:26.644 12:11:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:26.644 12:11:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:26.644 12:11:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:26.644 12:11:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:26.644 12:11:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:26.644 12:11:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:26.644 12:11:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:26.644 12:11:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:26.644 12:11:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:26.644 12:11:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:26.644 12:11:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:26.644 12:11:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:26.644 12:11:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:26.644 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:26.644 12:11:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:26.644 12:11:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:26.644 12:11:39 -- nvmf/common.sh@349 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.644 12:11:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.644 12:11:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:26.644 12:11:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:26.644 12:11:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:26.644 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:26.644 12:11:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:26.644 12:11:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:26.644 12:11:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.644 12:11:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.644 12:11:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:26.644 12:11:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:26.644 12:11:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:26.644 12:11:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:26.644 12:11:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:26.644 12:11:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.644 12:11:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:26.644 12:11:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.644 12:11:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:26.644 Found net devices under 0000:31:00.0: cvl_0_0 00:17:26.644 12:11:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.644 12:11:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:26.644 12:11:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.644 12:11:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:26.644 12:11:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.644 12:11:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:26.644 Found net devices under 0000:31:00.1: cvl_0_1 00:17:26.644 12:11:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.644 12:11:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:26.644 12:11:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:26.644 12:11:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:26.644 12:11:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:26.644 12:11:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:26.644 12:11:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:26.644 12:11:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:26.644 12:11:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:26.644 12:11:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:26.644 12:11:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:26.644 12:11:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:26.644 12:11:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:26.644 12:11:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:26.644 12:11:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:26.644 12:11:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:26.644 12:11:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:26.644 12:11:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:26.906 12:11:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:26.906 12:11:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
00:17:26.906 12:11:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:26.906 12:11:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:26.906 12:11:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:27.166 12:11:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:27.166 12:11:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:27.166 12:11:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:27.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:27.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:17:27.166 00:17:27.166 --- 10.0.0.2 ping statistics --- 00:17:27.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.166 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:17:27.166 12:11:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:27.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:27.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:17:27.166 00:17:27.166 --- 10.0.0.1 ping statistics --- 00:17:27.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.166 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:17:27.166 12:11:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.166 12:11:39 -- nvmf/common.sh@410 -- # return 0 00:17:27.166 12:11:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:27.166 12:11:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.166 12:11:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:27.166 12:11:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:27.166 12:11:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.166 12:11:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:27.166 12:11:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:27.166 12:11:40 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:27.166 12:11:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:27.166 12:11:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:27.166 12:11:40 -- common/autotest_common.sh@10 -- # set +x 00:17:27.167 12:11:40 -- nvmf/common.sh@469 -- # nvmfpid=1451412 00:17:27.167 12:11:40 -- nvmf/common.sh@470 -- # waitforlisten 1451412 00:17:27.167 12:11:40 -- common/autotest_common.sh@819 -- # '[' -z 1451412 ']' 00:17:27.167 12:11:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:27.167 12:11:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.167 12:11:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:27.167 12:11:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.167 12:11:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:27.167 12:11:40 -- common/autotest_common.sh@10 -- # set +x 00:17:27.167 [2024-06-11 12:11:40.066047] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
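For reference, the target/initiator plumbing that nvmf_tcp_init traced above boils down to the commands below (interface names and addresses are the ones detected in this run; a sketch only, the common.sh helper also covers rigs without suitable physical NICs):

  ip netns add cvl_0_0_ns_spdk                                   # the target gets its own network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP port 4420 in the host firewall for the test interface
  ping -c 1 10.0.0.2                                             # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> root namespace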
00:17:27.167 [2024-06-11 12:11:40.066113] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.167 EAL: No free 2048 kB hugepages reported on node 1 00:17:27.167 [2024-06-11 12:11:40.139824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:27.167 [2024-06-11 12:11:40.178352] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:27.167 [2024-06-11 12:11:40.178506] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.167 [2024-06-11 12:11:40.178517] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.167 [2024-06-11 12:11:40.178524] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:27.167 [2024-06-11 12:11:40.178709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.167 [2024-06-11 12:11:40.178829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.167 [2024-06-11 12:11:40.178832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.143 12:11:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:28.143 12:11:40 -- common/autotest_common.sh@852 -- # return 0 00:17:28.143 12:11:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:28.143 12:11:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:28.143 12:11:40 -- common/autotest_common.sh@10 -- # set +x 00:17:28.143 12:11:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.143 12:11:40 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:28.143 [2024-06-11 12:11:41.019100] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:28.143 12:11:41 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:28.403 12:11:41 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:28.403 12:11:41 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:28.403 12:11:41 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:28.403 12:11:41 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:28.663 12:11:41 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:28.924 12:11:41 -- target/nvmf_lvol.sh@29 -- # lvs=b9c8bc9a-a3f7-4d0a-9ee3-7a5c62d5aede 00:17:28.924 12:11:41 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b9c8bc9a-a3f7-4d0a-9ee3-7a5c62d5aede lvol 20 00:17:28.924 12:11:41 -- target/nvmf_lvol.sh@32 -- # lvol=3776c991-62af-4dbd-97e3-23300e0296fe 00:17:28.924 12:11:41 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:29.184 12:11:42 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
3776c991-62af-4dbd-97e3-23300e0296fe 00:17:29.184 12:11:42 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:29.444 [2024-06-11 12:11:42.311842] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:29.444 12:11:42 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:29.704 12:11:42 -- target/nvmf_lvol.sh@42 -- # perf_pid=1451941 00:17:29.704 12:11:42 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:29.704 12:11:42 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:29.704 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.644 12:11:43 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3776c991-62af-4dbd-97e3-23300e0296fe MY_SNAPSHOT 00:17:30.905 12:11:43 -- target/nvmf_lvol.sh@47 -- # snapshot=6803f00f-2a0a-4509-8c8f-599928a0312e 00:17:30.905 12:11:43 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3776c991-62af-4dbd-97e3-23300e0296fe 30 00:17:30.905 12:11:43 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6803f00f-2a0a-4509-8c8f-599928a0312e MY_CLONE 00:17:31.166 12:11:44 -- target/nvmf_lvol.sh@49 -- # clone=5e1a289c-c79a-48d8-a8b5-e8825e724ad1 00:17:31.166 12:11:44 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 5e1a289c-c79a-48d8-a8b5-e8825e724ad1 00:17:31.426 12:11:44 -- target/nvmf_lvol.sh@53 -- # wait 1451941 00:17:41.422 Initializing NVMe Controllers 00:17:41.422 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:41.422 Controller IO queue size 128, less than required. 00:17:41.422 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:41.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:41.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:41.422 Initialization complete. Launching workers. 
00:17:41.422 ======================================================== 00:17:41.422 Latency(us) 00:17:41.422 Device Information : IOPS MiB/s Average min max 00:17:41.422 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12401.69 48.44 10323.62 1414.43 50083.01 00:17:41.422 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16133.06 63.02 7935.05 2882.81 65405.13 00:17:41.422 ======================================================== 00:17:41.422 Total : 28534.76 111.46 8973.16 1414.43 65405.13 00:17:41.422 00:17:41.422 12:11:52 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:41.422 12:11:52 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3776c991-62af-4dbd-97e3-23300e0296fe 00:17:41.422 12:11:53 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b9c8bc9a-a3f7-4d0a-9ee3-7a5c62d5aede 00:17:41.422 12:11:53 -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:41.422 12:11:53 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:41.422 12:11:53 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:41.422 12:11:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:41.422 12:11:53 -- nvmf/common.sh@116 -- # sync 00:17:41.422 12:11:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:41.422 12:11:53 -- nvmf/common.sh@119 -- # set +e 00:17:41.422 12:11:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:41.422 12:11:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:41.422 rmmod nvme_tcp 00:17:41.422 rmmod nvme_fabrics 00:17:41.422 rmmod nvme_keyring 00:17:41.422 12:11:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:41.422 12:11:53 -- nvmf/common.sh@123 -- # set -e 00:17:41.422 12:11:53 -- nvmf/common.sh@124 -- # return 0 00:17:41.422 12:11:53 -- nvmf/common.sh@477 -- # '[' -n 1451412 ']' 00:17:41.422 12:11:53 -- nvmf/common.sh@478 -- # killprocess 1451412 00:17:41.422 12:11:53 -- common/autotest_common.sh@926 -- # '[' -z 1451412 ']' 00:17:41.422 12:11:53 -- common/autotest_common.sh@930 -- # kill -0 1451412 00:17:41.422 12:11:53 -- common/autotest_common.sh@931 -- # uname 00:17:41.422 12:11:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:41.422 12:11:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1451412 00:17:41.422 12:11:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:41.422 12:11:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:41.422 12:11:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1451412' 00:17:41.422 killing process with pid 1451412 00:17:41.422 12:11:53 -- common/autotest_common.sh@945 -- # kill 1451412 00:17:41.422 12:11:53 -- common/autotest_common.sh@950 -- # wait 1451412 00:17:41.422 12:11:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:41.422 12:11:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:41.422 12:11:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:41.422 12:11:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:41.422 12:11:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:41.422 12:11:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.422 12:11:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:41.422 12:11:53 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:42.805 12:11:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:42.805 00:17:42.805 real 0m22.951s 00:17:42.805 user 1m3.089s 00:17:42.805 sys 0m7.672s 00:17:42.805 12:11:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:42.805 12:11:55 -- common/autotest_common.sh@10 -- # set +x 00:17:42.805 ************************************ 00:17:42.805 END TEST nvmf_lvol 00:17:42.805 ************************************ 00:17:42.805 12:11:55 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:42.805 12:11:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:42.805 12:11:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:42.805 12:11:55 -- common/autotest_common.sh@10 -- # set +x 00:17:42.805 ************************************ 00:17:42.805 START TEST nvmf_lvs_grow 00:17:42.805 ************************************ 00:17:42.805 12:11:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:42.805 * Looking for test storage... 00:17:42.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:42.805 12:11:55 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:42.805 12:11:55 -- nvmf/common.sh@7 -- # uname -s 00:17:42.805 12:11:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.805 12:11:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.805 12:11:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.805 12:11:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.805 12:11:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.805 12:11:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.805 12:11:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.805 12:11:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.805 12:11:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.806 12:11:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.806 12:11:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:42.806 12:11:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:42.806 12:11:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.806 12:11:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.806 12:11:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:42.806 12:11:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:42.806 12:11:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.806 12:11:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.806 12:11:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.806 12:11:55 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.806 12:11:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.806 12:11:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.806 12:11:55 -- paths/export.sh@5 -- # export PATH 00:17:42.806 12:11:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.806 12:11:55 -- nvmf/common.sh@46 -- # : 0 00:17:42.806 12:11:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:42.806 12:11:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:42.806 12:11:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:42.806 12:11:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.806 12:11:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.806 12:11:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:42.806 12:11:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:42.806 12:11:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:42.806 12:11:55 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:42.806 12:11:55 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:42.806 12:11:55 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:17:42.806 12:11:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:42.806 12:11:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.806 12:11:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:42.806 12:11:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:42.806 12:11:55 -- nvmf/common.sh@400 -- # 
remove_spdk_ns 00:17:42.806 12:11:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.806 12:11:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.806 12:11:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.806 12:11:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:42.806 12:11:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:42.806 12:11:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:42.806 12:11:55 -- common/autotest_common.sh@10 -- # set +x 00:17:50.944 12:12:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:50.944 12:12:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:50.944 12:12:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:50.944 12:12:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:50.944 12:12:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:50.944 12:12:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:50.944 12:12:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:50.944 12:12:02 -- nvmf/common.sh@294 -- # net_devs=() 00:17:50.944 12:12:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:50.944 12:12:02 -- nvmf/common.sh@295 -- # e810=() 00:17:50.944 12:12:02 -- nvmf/common.sh@295 -- # local -ga e810 00:17:50.944 12:12:02 -- nvmf/common.sh@296 -- # x722=() 00:17:50.944 12:12:02 -- nvmf/common.sh@296 -- # local -ga x722 00:17:50.944 12:12:02 -- nvmf/common.sh@297 -- # mlx=() 00:17:50.944 12:12:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:50.944 12:12:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:50.944 12:12:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:50.944 12:12:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:50.944 12:12:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:50.944 12:12:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:50.944 12:12:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:50.944 12:12:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:50.944 12:12:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:50.944 12:12:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:50.944 12:12:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:50.944 12:12:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:50.944 12:12:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:50.944 12:12:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:50.944 12:12:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:50.944 12:12:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:50.944 12:12:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:50.944 12:12:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:50.944 12:12:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:50.944 12:12:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:50.944 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:50.944 12:12:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:50.944 12:12:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:50.944 12:12:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.944 12:12:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.944 12:12:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:50.944 
12:12:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:50.944 12:12:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:50.944 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:50.944 12:12:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:50.944 12:12:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:50.944 12:12:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.944 12:12:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.944 12:12:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:50.944 12:12:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:50.944 12:12:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:50.944 12:12:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:50.944 12:12:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:50.944 12:12:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.944 12:12:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:50.944 12:12:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.944 12:12:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:50.944 Found net devices under 0000:31:00.0: cvl_0_0 00:17:50.944 12:12:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.944 12:12:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:50.944 12:12:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.944 12:12:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:50.944 12:12:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.944 12:12:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:50.944 Found net devices under 0000:31:00.1: cvl_0_1 00:17:50.944 12:12:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.944 12:12:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:50.944 12:12:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:50.944 12:12:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:50.944 12:12:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:50.944 12:12:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:50.944 12:12:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.944 12:12:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.944 12:12:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:50.944 12:12:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:50.944 12:12:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:50.944 12:12:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:50.944 12:12:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:50.944 12:12:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:50.944 12:12:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.944 12:12:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:50.944 12:12:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:50.944 12:12:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:50.944 12:12:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:50.944 12:12:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:50.944 12:12:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:50.944 12:12:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:50.944 
12:12:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:50.945 12:12:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:50.945 12:12:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:50.945 12:12:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:50.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:50.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.504 ms 00:17:50.945 00:17:50.945 --- 10.0.0.2 ping statistics --- 00:17:50.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.945 rtt min/avg/max/mdev = 0.504/0.504/0.504/0.000 ms 00:17:50.945 12:12:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:50.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:50.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:17:50.945 00:17:50.945 --- 10.0.0.1 ping statistics --- 00:17:50.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.945 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:17:50.945 12:12:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.945 12:12:02 -- nvmf/common.sh@410 -- # return 0 00:17:50.945 12:12:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:50.945 12:12:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.945 12:12:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:50.945 12:12:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:50.945 12:12:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.945 12:12:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:50.945 12:12:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:50.945 12:12:03 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:17:50.945 12:12:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:50.945 12:12:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:50.945 12:12:03 -- common/autotest_common.sh@10 -- # set +x 00:17:50.945 12:12:03 -- nvmf/common.sh@469 -- # nvmfpid=1458492 00:17:50.945 12:12:03 -- nvmf/common.sh@470 -- # waitforlisten 1458492 00:17:50.945 12:12:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:50.945 12:12:03 -- common/autotest_common.sh@819 -- # '[' -z 1458492 ']' 00:17:50.945 12:12:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.945 12:12:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:50.945 12:12:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.945 12:12:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:50.945 12:12:03 -- common/autotest_common.sh@10 -- # set +x 00:17:50.945 [2024-06-11 12:12:03.073812] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:17:50.945 [2024-06-11 12:12:03.073875] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.945 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.945 [2024-06-11 12:12:03.145261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.945 [2024-06-11 12:12:03.182306] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:50.945 [2024-06-11 12:12:03.182444] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:50.945 [2024-06-11 12:12:03.182454] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:50.945 [2024-06-11 12:12:03.182461] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:50.945 [2024-06-11 12:12:03.182488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.945 12:12:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:50.945 12:12:03 -- common/autotest_common.sh@852 -- # return 0 00:17:50.945 12:12:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:50.945 12:12:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:50.945 12:12:03 -- common/autotest_common.sh@10 -- # set +x 00:17:50.945 12:12:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:50.945 12:12:03 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:51.205 [2024-06-11 12:12:04.008855] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.205 12:12:04 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:17:51.205 12:12:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:51.205 12:12:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:51.205 12:12:04 -- common/autotest_common.sh@10 -- # set +x 00:17:51.205 ************************************ 00:17:51.205 START TEST lvs_grow_clean 00:17:51.205 ************************************ 00:17:51.205 12:12:04 -- common/autotest_common.sh@1104 -- # lvs_grow 00:17:51.205 12:12:04 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:51.205 12:12:04 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:51.205 12:12:04 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:51.205 12:12:04 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:51.205 12:12:04 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:51.205 12:12:04 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:51.205 12:12:04 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:51.205 12:12:04 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:51.206 12:12:04 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:51.466 12:12:04 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:51.466 12:12:04 -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:51.466 12:12:04 -- target/nvmf_lvs_grow.sh@28 -- # lvs=041944ee-843b-4480-88db-2d2c51038f56 00:17:51.466 12:12:04 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 041944ee-843b-4480-88db-2d2c51038f56 00:17:51.466 12:12:04 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:51.726 12:12:04 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:51.726 12:12:04 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:51.726 12:12:04 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 041944ee-843b-4480-88db-2d2c51038f56 lvol 150 00:17:51.726 12:12:04 -- target/nvmf_lvs_grow.sh@33 -- # lvol=a94b2aaf-bace-44f8-bedf-e7229e34c961 00:17:51.726 12:12:04 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:51.726 12:12:04 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:51.985 [2024-06-11 12:12:04.868101] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:51.985 [2024-06-11 12:12:04.868151] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:51.985 true 00:17:51.986 12:12:04 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 041944ee-843b-4480-88db-2d2c51038f56 00:17:51.986 12:12:04 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:52.245 12:12:05 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:52.245 12:12:05 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:52.245 12:12:05 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a94b2aaf-bace-44f8-bedf-e7229e34c961 00:17:52.505 12:12:05 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:52.505 [2024-06-11 12:12:05.465962] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:52.505 12:12:05 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:52.766 12:12:05 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1458911 00:17:52.766 12:12:05 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:52.766 12:12:05 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:52.766 12:12:05 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1458911 /var/tmp/bdevperf.sock 00:17:52.766 12:12:05 -- common/autotest_common.sh@819 -- # '[' -z 1458911 ']' 00:17:52.766 
12:12:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:52.766 12:12:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:52.766 12:12:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:52.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:52.766 12:12:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:52.766 12:12:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.766 [2024-06-11 12:12:05.661706] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:52.766 [2024-06-11 12:12:05.661757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1458911 ] 00:17:52.766 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.766 [2024-06-11 12:12:05.737130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.766 [2024-06-11 12:12:05.765935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.706 12:12:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:53.706 12:12:06 -- common/autotest_common.sh@852 -- # return 0 00:17:53.706 12:12:06 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:53.706 Nvme0n1 00:17:53.706 12:12:06 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:53.966 [ 00:17:53.966 { 00:17:53.966 "name": "Nvme0n1", 00:17:53.966 "aliases": [ 00:17:53.966 "a94b2aaf-bace-44f8-bedf-e7229e34c961" 00:17:53.966 ], 00:17:53.966 "product_name": "NVMe disk", 00:17:53.966 "block_size": 4096, 00:17:53.966 "num_blocks": 38912, 00:17:53.966 "uuid": "a94b2aaf-bace-44f8-bedf-e7229e34c961", 00:17:53.966 "assigned_rate_limits": { 00:17:53.966 "rw_ios_per_sec": 0, 00:17:53.966 "rw_mbytes_per_sec": 0, 00:17:53.966 "r_mbytes_per_sec": 0, 00:17:53.966 "w_mbytes_per_sec": 0 00:17:53.966 }, 00:17:53.966 "claimed": false, 00:17:53.966 "zoned": false, 00:17:53.966 "supported_io_types": { 00:17:53.966 "read": true, 00:17:53.967 "write": true, 00:17:53.967 "unmap": true, 00:17:53.967 "write_zeroes": true, 00:17:53.967 "flush": true, 00:17:53.967 "reset": true, 00:17:53.967 "compare": true, 00:17:53.967 "compare_and_write": true, 00:17:53.967 "abort": true, 00:17:53.967 "nvme_admin": true, 00:17:53.967 "nvme_io": true 00:17:53.967 }, 00:17:53.967 "driver_specific": { 00:17:53.967 "nvme": [ 00:17:53.967 { 00:17:53.967 "trid": { 00:17:53.967 "trtype": "TCP", 00:17:53.967 "adrfam": "IPv4", 00:17:53.967 "traddr": "10.0.0.2", 00:17:53.967 "trsvcid": "4420", 00:17:53.967 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:53.967 }, 00:17:53.967 "ctrlr_data": { 00:17:53.967 "cntlid": 1, 00:17:53.967 "vendor_id": "0x8086", 00:17:53.967 "model_number": "SPDK bdev Controller", 00:17:53.967 "serial_number": "SPDK0", 00:17:53.967 "firmware_revision": "24.01.1", 00:17:53.967 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:53.967 "oacs": { 00:17:53.967 "security": 0, 00:17:53.967 "format": 0, 00:17:53.967 "firmware": 0, 00:17:53.967 "ns_manage": 0 00:17:53.967 }, 00:17:53.967 "multi_ctrlr": 
true, 00:17:53.967 "ana_reporting": false 00:17:53.967 }, 00:17:53.967 "vs": { 00:17:53.967 "nvme_version": "1.3" 00:17:53.967 }, 00:17:53.967 "ns_data": { 00:17:53.967 "id": 1, 00:17:53.967 "can_share": true 00:17:53.967 } 00:17:53.967 } 00:17:53.967 ], 00:17:53.967 "mp_policy": "active_passive" 00:17:53.967 } 00:17:53.967 } 00:17:53.967 ] 00:17:53.967 12:12:06 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1459225 00:17:53.967 12:12:06 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:53.967 12:12:06 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:53.967 Running I/O for 10 seconds... 00:17:54.908 Latency(us) 00:17:54.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:54.909 Nvme0n1 : 1.00 18634.00 72.79 0.00 0.00 0.00 0.00 0.00 00:17:54.909 =================================================================================================================== 00:17:54.909 Total : 18634.00 72.79 0.00 0.00 0.00 0.00 0.00 00:17:54.909 00:17:55.853 12:12:08 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 041944ee-843b-4480-88db-2d2c51038f56 00:17:56.114 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:56.114 Nvme0n1 : 2.00 18751.00 73.25 0.00 0.00 0.00 0.00 0.00 00:17:56.114 =================================================================================================================== 00:17:56.114 Total : 18751.00 73.25 0.00 0.00 0.00 0.00 0.00 00:17:56.114 00:17:56.114 true 00:17:56.114 12:12:08 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 041944ee-843b-4480-88db-2d2c51038f56 00:17:56.114 12:12:08 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:56.114 12:12:09 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:56.114 12:12:09 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:56.114 12:12:09 -- target/nvmf_lvs_grow.sh@65 -- # wait 1459225 00:17:57.057 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:57.057 Nvme0n1 : 3.00 18796.67 73.42 0.00 0.00 0.00 0.00 0.00 00:17:57.057 =================================================================================================================== 00:17:57.057 Total : 18796.67 73.42 0.00 0.00 0.00 0.00 0.00 00:17:57.057 00:17:58.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:58.000 Nvme0n1 : 4.00 18828.00 73.55 0.00 0.00 0.00 0.00 0.00 00:17:58.000 =================================================================================================================== 00:17:58.000 Total : 18828.00 73.55 0.00 0.00 0.00 0.00 0.00 00:17:58.000 00:17:58.943 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:58.943 Nvme0n1 : 5.00 18863.00 73.68 0.00 0.00 0.00 0.00 0.00 00:17:58.943 =================================================================================================================== 00:17:58.943 Total : 18863.00 73.68 0.00 0.00 0.00 0.00 0.00 00:17:58.943 00:17:59.884 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:59.884 Nvme0n1 : 6.00 18886.50 73.78 0.00 0.00 0.00 0.00 0.00 00:17:59.884 
=================================================================================================================== 00:17:59.884 Total : 18886.50 73.78 0.00 0.00 0.00 0.00 0.00 00:17:59.884 00:18:01.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:01.266 Nvme0n1 : 7.00 18902.29 73.84 0.00 0.00 0.00 0.00 0.00 00:18:01.266 =================================================================================================================== 00:18:01.266 Total : 18902.29 73.84 0.00 0.00 0.00 0.00 0.00 00:18:01.266 00:18:02.207 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:02.207 Nvme0n1 : 8.00 18915.25 73.89 0.00 0.00 0.00 0.00 0.00 00:18:02.207 =================================================================================================================== 00:18:02.207 Total : 18915.25 73.89 0.00 0.00 0.00 0.00 0.00 00:18:02.207 00:18:03.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:03.147 Nvme0n1 : 9.00 18925.11 73.93 0.00 0.00 0.00 0.00 0.00 00:18:03.147 =================================================================================================================== 00:18:03.147 Total : 18925.11 73.93 0.00 0.00 0.00 0.00 0.00 00:18:03.147 00:18:04.089 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:04.089 Nvme0n1 : 10.00 18938.40 73.98 0.00 0.00 0.00 0.00 0.00 00:18:04.089 =================================================================================================================== 00:18:04.089 Total : 18938.40 73.98 0.00 0.00 0.00 0.00 0.00 00:18:04.089 00:18:04.089 00:18:04.089 Latency(us) 00:18:04.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.089 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:04.089 Nvme0n1 : 10.01 18940.07 73.98 0.00 0.00 6754.03 3959.47 17039.36 00:18:04.089 =================================================================================================================== 00:18:04.089 Total : 18940.07 73.98 0.00 0.00 6754.03 3959.47 17039.36 00:18:04.089 0 00:18:04.089 12:12:16 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1458911 00:18:04.089 12:12:16 -- common/autotest_common.sh@926 -- # '[' -z 1458911 ']' 00:18:04.089 12:12:16 -- common/autotest_common.sh@930 -- # kill -0 1458911 00:18:04.089 12:12:16 -- common/autotest_common.sh@931 -- # uname 00:18:04.089 12:12:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:04.089 12:12:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1458911 00:18:04.089 12:12:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:04.089 12:12:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:04.089 12:12:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1458911' 00:18:04.089 killing process with pid 1458911 00:18:04.089 12:12:16 -- common/autotest_common.sh@945 -- # kill 1458911 00:18:04.089 Received shutdown signal, test time was about 10.000000 seconds 00:18:04.089 00:18:04.089 Latency(us) 00:18:04.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.089 =================================================================================================================== 00:18:04.089 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:04.090 12:12:16 -- common/autotest_common.sh@950 -- # wait 1458911 00:18:04.090 12:12:17 -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:04.350 12:12:17 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 041944ee-843b-4480-88db-2d2c51038f56 00:18:04.350 12:12:17 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:04.611 12:12:17 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:04.611 12:12:17 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:18:04.611 12:12:17 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:04.611 [2024-06-11 12:12:17.553492] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:04.611 12:12:17 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 041944ee-843b-4480-88db-2d2c51038f56 00:18:04.611 12:12:17 -- common/autotest_common.sh@640 -- # local es=0 00:18:04.611 12:12:17 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 041944ee-843b-4480-88db-2d2c51038f56 00:18:04.611 12:12:17 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:04.611 12:12:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:04.611 12:12:17 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:04.611 12:12:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:04.611 12:12:17 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:04.611 12:12:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:04.611 12:12:17 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:04.611 12:12:17 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:04.611 12:12:17 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 041944ee-843b-4480-88db-2d2c51038f56 00:18:04.871 request: 00:18:04.871 { 00:18:04.871 "uuid": "041944ee-843b-4480-88db-2d2c51038f56", 00:18:04.871 "method": "bdev_lvol_get_lvstores", 00:18:04.871 "req_id": 1 00:18:04.871 } 00:18:04.871 Got JSON-RPC error response 00:18:04.871 response: 00:18:04.871 { 00:18:04.871 "code": -19, 00:18:04.871 "message": "No such device" 00:18:04.871 } 00:18:04.871 12:12:17 -- common/autotest_common.sh@643 -- # es=1 00:18:04.871 12:12:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:04.871 12:12:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:04.872 12:12:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:04.872 12:12:17 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:04.872 aio_bdev 00:18:05.132 12:12:17 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev a94b2aaf-bace-44f8-bedf-e7229e34c961 00:18:05.132 12:12:17 -- common/autotest_common.sh@887 -- # local bdev_name=a94b2aaf-bace-44f8-bedf-e7229e34c961 00:18:05.132 12:12:17 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:05.132 12:12:17 -- common/autotest_common.sh@889 -- # local i 00:18:05.132 12:12:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:05.132 12:12:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:05.132 12:12:17 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:05.132 12:12:18 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a94b2aaf-bace-44f8-bedf-e7229e34c961 -t 2000 00:18:05.447 [ 00:18:05.447 { 00:18:05.447 "name": "a94b2aaf-bace-44f8-bedf-e7229e34c961", 00:18:05.447 "aliases": [ 00:18:05.447 "lvs/lvol" 00:18:05.447 ], 00:18:05.447 "product_name": "Logical Volume", 00:18:05.447 "block_size": 4096, 00:18:05.447 "num_blocks": 38912, 00:18:05.447 "uuid": "a94b2aaf-bace-44f8-bedf-e7229e34c961", 00:18:05.447 "assigned_rate_limits": { 00:18:05.447 "rw_ios_per_sec": 0, 00:18:05.447 "rw_mbytes_per_sec": 0, 00:18:05.447 "r_mbytes_per_sec": 0, 00:18:05.447 "w_mbytes_per_sec": 0 00:18:05.447 }, 00:18:05.447 "claimed": false, 00:18:05.447 "zoned": false, 00:18:05.447 "supported_io_types": { 00:18:05.447 "read": true, 00:18:05.447 "write": true, 00:18:05.447 "unmap": true, 00:18:05.447 "write_zeroes": true, 00:18:05.447 "flush": false, 00:18:05.447 "reset": true, 00:18:05.447 "compare": false, 00:18:05.447 "compare_and_write": false, 00:18:05.447 "abort": false, 00:18:05.447 "nvme_admin": false, 00:18:05.447 "nvme_io": false 00:18:05.447 }, 00:18:05.447 "driver_specific": { 00:18:05.447 "lvol": { 00:18:05.447 "lvol_store_uuid": "041944ee-843b-4480-88db-2d2c51038f56", 00:18:05.447 "base_bdev": "aio_bdev", 00:18:05.447 "thin_provision": false, 00:18:05.447 "snapshot": false, 00:18:05.447 "clone": false, 00:18:05.447 "esnap_clone": false 00:18:05.447 } 00:18:05.447 } 00:18:05.447 } 00:18:05.447 ] 00:18:05.447 12:12:18 -- common/autotest_common.sh@895 -- # return 0 00:18:05.447 12:12:18 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 041944ee-843b-4480-88db-2d2c51038f56 00:18:05.447 12:12:18 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:05.447 12:12:18 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:05.447 12:12:18 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 041944ee-843b-4480-88db-2d2c51038f56 00:18:05.447 12:12:18 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:05.732 12:12:18 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:05.732 12:12:18 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a94b2aaf-bace-44f8-bedf-e7229e34c961 00:18:05.732 12:12:18 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 041944ee-843b-4480-88db-2d2c51038f56 00:18:05.992 12:12:18 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:05.992 12:12:18 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:05.992 00:18:05.992 real 0m14.954s 00:18:05.992 user 0m14.638s 00:18:05.992 sys 0m1.277s 00:18:05.992 12:12:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:18:05.992 12:12:18 -- common/autotest_common.sh@10 -- # set +x 00:18:05.992 ************************************ 00:18:05.992 END TEST lvs_grow_clean 00:18:05.992 ************************************ 00:18:06.253 12:12:19 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:06.253 12:12:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:06.253 12:12:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:06.253 12:12:19 -- common/autotest_common.sh@10 -- # set +x 00:18:06.253 ************************************ 00:18:06.253 START TEST lvs_grow_dirty 00:18:06.253 ************************************ 00:18:06.253 12:12:19 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:18:06.253 12:12:19 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:06.253 12:12:19 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:06.253 12:12:19 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:06.253 12:12:19 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:06.253 12:12:19 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:06.253 12:12:19 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:06.253 12:12:19 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:06.253 12:12:19 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:06.253 12:12:19 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:06.253 12:12:19 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:06.253 12:12:19 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:06.515 12:12:19 -- target/nvmf_lvs_grow.sh@28 -- # lvs=650f29d9-b7aa-49e6-90f9-44d71286824e 00:18:06.515 12:12:19 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 650f29d9-b7aa-49e6-90f9-44d71286824e 00:18:06.515 12:12:19 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:06.515 12:12:19 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:06.515 12:12:19 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:06.515 12:12:19 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 650f29d9-b7aa-49e6-90f9-44d71286824e lvol 150 00:18:06.775 12:12:19 -- target/nvmf_lvs_grow.sh@33 -- # lvol=9a06bb37-2919-463f-8dd9-7d3c7fb2d435 00:18:06.775 12:12:19 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:06.775 12:12:19 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:06.775 [2024-06-11 12:12:19.782330] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:06.775 [2024-06-11 12:12:19.782382] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:06.775 
true 00:18:06.775 12:12:19 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 650f29d9-b7aa-49e6-90f9-44d71286824e 00:18:06.775 12:12:19 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:07.036 12:12:19 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:07.036 12:12:19 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:07.297 12:12:20 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9a06bb37-2919-463f-8dd9-7d3c7fb2d435 00:18:07.297 12:12:20 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:07.556 12:12:20 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:07.556 12:12:20 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1462443 00:18:07.556 12:12:20 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:07.556 12:12:20 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:07.556 12:12:20 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1462443 /var/tmp/bdevperf.sock 00:18:07.556 12:12:20 -- common/autotest_common.sh@819 -- # '[' -z 1462443 ']' 00:18:07.556 12:12:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:07.556 12:12:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:07.557 12:12:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:07.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:07.557 12:12:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:07.557 12:12:20 -- common/autotest_common.sh@10 -- # set +x 00:18:07.816 [2024-06-11 12:12:20.596863] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:18:07.816 [2024-06-11 12:12:20.596916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1462443 ] 00:18:07.816 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.816 [2024-06-11 12:12:20.673361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.816 [2024-06-11 12:12:20.700293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.386 12:12:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:08.386 12:12:21 -- common/autotest_common.sh@852 -- # return 0 00:18:08.386 12:12:21 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:08.646 Nvme0n1 00:18:08.646 12:12:21 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:08.907 [ 00:18:08.907 { 00:18:08.907 "name": "Nvme0n1", 00:18:08.907 "aliases": [ 00:18:08.907 "9a06bb37-2919-463f-8dd9-7d3c7fb2d435" 00:18:08.907 ], 00:18:08.907 "product_name": "NVMe disk", 00:18:08.907 "block_size": 4096, 00:18:08.907 "num_blocks": 38912, 00:18:08.907 "uuid": "9a06bb37-2919-463f-8dd9-7d3c7fb2d435", 00:18:08.907 "assigned_rate_limits": { 00:18:08.907 "rw_ios_per_sec": 0, 00:18:08.907 "rw_mbytes_per_sec": 0, 00:18:08.907 "r_mbytes_per_sec": 0, 00:18:08.907 "w_mbytes_per_sec": 0 00:18:08.907 }, 00:18:08.907 "claimed": false, 00:18:08.907 "zoned": false, 00:18:08.907 "supported_io_types": { 00:18:08.908 "read": true, 00:18:08.908 "write": true, 00:18:08.908 "unmap": true, 00:18:08.908 "write_zeroes": true, 00:18:08.908 "flush": true, 00:18:08.908 "reset": true, 00:18:08.908 "compare": true, 00:18:08.908 "compare_and_write": true, 00:18:08.908 "abort": true, 00:18:08.908 "nvme_admin": true, 00:18:08.908 "nvme_io": true 00:18:08.908 }, 00:18:08.908 "driver_specific": { 00:18:08.908 "nvme": [ 00:18:08.908 { 00:18:08.908 "trid": { 00:18:08.908 "trtype": "TCP", 00:18:08.908 "adrfam": "IPv4", 00:18:08.908 "traddr": "10.0.0.2", 00:18:08.908 "trsvcid": "4420", 00:18:08.908 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:08.908 }, 00:18:08.908 "ctrlr_data": { 00:18:08.908 "cntlid": 1, 00:18:08.908 "vendor_id": "0x8086", 00:18:08.908 "model_number": "SPDK bdev Controller", 00:18:08.908 "serial_number": "SPDK0", 00:18:08.908 "firmware_revision": "24.01.1", 00:18:08.908 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:08.908 "oacs": { 00:18:08.908 "security": 0, 00:18:08.908 "format": 0, 00:18:08.908 "firmware": 0, 00:18:08.908 "ns_manage": 0 00:18:08.908 }, 00:18:08.908 "multi_ctrlr": true, 00:18:08.908 "ana_reporting": false 00:18:08.908 }, 00:18:08.908 "vs": { 00:18:08.908 "nvme_version": "1.3" 00:18:08.908 }, 00:18:08.908 "ns_data": { 00:18:08.908 "id": 1, 00:18:08.908 "can_share": true 00:18:08.908 } 00:18:08.908 } 00:18:08.908 ], 00:18:08.908 "mp_policy": "active_passive" 00:18:08.908 } 00:18:08.908 } 00:18:08.908 ] 00:18:08.908 12:12:21 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1462714 00:18:08.908 12:12:21 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:08.908 12:12:21 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:08.908 Running I/O 
for 10 seconds... 00:18:09.848 Latency(us) 00:18:09.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:09.848 Nvme0n1 : 1.00 18762.00 73.29 0.00 0.00 0.00 0.00 0.00 00:18:09.848 =================================================================================================================== 00:18:09.848 Total : 18762.00 73.29 0.00 0.00 0.00 0.00 0.00 00:18:09.848 00:18:10.789 12:12:23 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 650f29d9-b7aa-49e6-90f9-44d71286824e 00:18:11.049 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:11.049 Nvme0n1 : 2.00 18825.00 73.54 0.00 0.00 0.00 0.00 0.00 00:18:11.049 =================================================================================================================== 00:18:11.049 Total : 18825.00 73.54 0.00 0.00 0.00 0.00 0.00 00:18:11.049 00:18:11.049 true 00:18:11.049 12:12:23 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 650f29d9-b7aa-49e6-90f9-44d71286824e 00:18:11.049 12:12:23 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:11.049 12:12:24 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:11.049 12:12:24 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:11.049 12:12:24 -- target/nvmf_lvs_grow.sh@65 -- # wait 1462714 00:18:11.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:11.991 Nvme0n1 : 3.00 18859.00 73.67 0.00 0.00 0.00 0.00 0.00 00:18:11.991 =================================================================================================================== 00:18:11.991 Total : 18859.00 73.67 0.00 0.00 0.00 0.00 0.00 00:18:11.991 00:18:12.933 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:12.933 Nvme0n1 : 4.00 18884.50 73.77 0.00 0.00 0.00 0.00 0.00 00:18:12.933 =================================================================================================================== 00:18:12.933 Total : 18884.50 73.77 0.00 0.00 0.00 0.00 0.00 00:18:12.933 00:18:13.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:13.874 Nvme0n1 : 5.00 18905.40 73.85 0.00 0.00 0.00 0.00 0.00 00:18:13.874 =================================================================================================================== 00:18:13.874 Total : 18905.40 73.85 0.00 0.00 0.00 0.00 0.00 00:18:13.874 00:18:14.814 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:14.814 Nvme0n1 : 6.00 18922.33 73.92 0.00 0.00 0.00 0.00 0.00 00:18:14.814 =================================================================================================================== 00:18:14.814 Total : 18922.33 73.92 0.00 0.00 0.00 0.00 0.00 00:18:14.814 00:18:16.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:16.195 Nvme0n1 : 7.00 18925.43 73.93 0.00 0.00 0.00 0.00 0.00 00:18:16.195 =================================================================================================================== 00:18:16.195 Total : 18925.43 73.93 0.00 0.00 0.00 0.00 0.00 00:18:16.195 00:18:17.137 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:17.137 Nvme0n1 : 8.00 18935.75 73.97 0.00 0.00 0.00 0.00 0.00 00:18:17.137 
=================================================================================================================== 00:18:17.137 Total : 18935.75 73.97 0.00 0.00 0.00 0.00 0.00 00:18:17.137 00:18:18.076 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:18.076 Nvme0n1 : 9.00 18936.22 73.97 0.00 0.00 0.00 0.00 0.00 00:18:18.076 =================================================================================================================== 00:18:18.076 Total : 18936.22 73.97 0.00 0.00 0.00 0.00 0.00 00:18:18.076 00:18:19.015 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:19.015 Nvme0n1 : 10.00 18943.40 74.00 0.00 0.00 0.00 0.00 0.00 00:18:19.015 =================================================================================================================== 00:18:19.015 Total : 18943.40 74.00 0.00 0.00 0.00 0.00 0.00 00:18:19.015 00:18:19.015 00:18:19.015 Latency(us) 00:18:19.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.015 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:19.015 Nvme0n1 : 10.00 18946.86 74.01 0.00 0.00 6751.99 1488.21 10431.15 00:18:19.015 =================================================================================================================== 00:18:19.015 Total : 18946.86 74.01 0.00 0.00 6751.99 1488.21 10431.15 00:18:19.015 0 00:18:19.015 12:12:31 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1462443 00:18:19.015 12:12:31 -- common/autotest_common.sh@926 -- # '[' -z 1462443 ']' 00:18:19.015 12:12:31 -- common/autotest_common.sh@930 -- # kill -0 1462443 00:18:19.015 12:12:31 -- common/autotest_common.sh@931 -- # uname 00:18:19.015 12:12:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:19.015 12:12:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1462443 00:18:19.015 12:12:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:19.015 12:12:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:19.015 12:12:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1462443' 00:18:19.015 killing process with pid 1462443 00:18:19.015 12:12:31 -- common/autotest_common.sh@945 -- # kill 1462443 00:18:19.015 Received shutdown signal, test time was about 10.000000 seconds 00:18:19.015 00:18:19.015 Latency(us) 00:18:19.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.015 =================================================================================================================== 00:18:19.015 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:19.015 12:12:31 -- common/autotest_common.sh@950 -- # wait 1462443 00:18:19.015 12:12:32 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:19.273 12:12:32 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 650f29d9-b7aa-49e6-90f9-44d71286824e 00:18:19.273 12:12:32 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:19.532 12:12:32 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:19.532 12:12:32 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:18:19.532 12:12:32 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1458492 00:18:19.532 12:12:32 -- target/nvmf_lvs_grow.sh@74 -- # wait 1458492 00:18:19.532 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1458492 Killed "${NVMF_APP[@]}" "$@" 00:18:19.532 12:12:32 -- target/nvmf_lvs_grow.sh@74 -- # true 00:18:19.532 12:12:32 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:18:19.532 12:12:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:19.532 12:12:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:19.532 12:12:32 -- common/autotest_common.sh@10 -- # set +x 00:18:19.533 12:12:32 -- nvmf/common.sh@469 -- # nvmfpid=1464819 00:18:19.533 12:12:32 -- nvmf/common.sh@470 -- # waitforlisten 1464819 00:18:19.533 12:12:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:19.533 12:12:32 -- common/autotest_common.sh@819 -- # '[' -z 1464819 ']' 00:18:19.533 12:12:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.533 12:12:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:19.533 12:12:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.533 12:12:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:19.533 12:12:32 -- common/autotest_common.sh@10 -- # set +x 00:18:19.533 [2024-06-11 12:12:32.477951] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:19.533 [2024-06-11 12:12:32.478005] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.533 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.533 [2024-06-11 12:12:32.543673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.792 [2024-06-11 12:12:32.572239] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:19.792 [2024-06-11 12:12:32.572356] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.792 [2024-06-11 12:12:32.572364] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.792 [2024-06-11 12:12:32.572371] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
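The restart above is the "dirty" half of the test: while the 10-second randwrite job was running, the lvstore was grown in place (bdev_lvol_grow_lvstore, total_data_clusters 49 -> 99), and the original nvmf_tgt (pid 1458492) was then killed with SIGKILL so the blobstore never saw a clean shutdown. The freshly started target must therefore re-create the AIO bdev and recover the lvstore, which is what the bs_recover notices just below show. The grow-and-kill steps, condensed (UUID and pid are specific to this run):

  rpc.py bdev_lvol_grow_lvstore -u 650f29d9-b7aa-49e6-90f9-44d71286824e          # claim the new clusters
  rpc.py bdev_lvol_get_lvstores -u 650f29d9-b7aa-49e6-90f9-44d71286824e \
      | jq -r '.[0].total_data_clusters'                                         # 49 -> 99
  kill -9 1458492                                                                # leave the lvstore dirty
  # the next bdev_aio_create on the same file triggers blobstore recovery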
00:18:19.792 [2024-06-11 12:12:32.572393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.362 12:12:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:20.362 12:12:33 -- common/autotest_common.sh@852 -- # return 0 00:18:20.362 12:12:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:20.362 12:12:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:20.362 12:12:33 -- common/autotest_common.sh@10 -- # set +x 00:18:20.362 12:12:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.362 12:12:33 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:20.363 [2024-06-11 12:12:33.394741] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:20.363 [2024-06-11 12:12:33.394828] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:20.363 [2024-06-11 12:12:33.394857] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:20.623 12:12:33 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:18:20.623 12:12:33 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 9a06bb37-2919-463f-8dd9-7d3c7fb2d435 00:18:20.623 12:12:33 -- common/autotest_common.sh@887 -- # local bdev_name=9a06bb37-2919-463f-8dd9-7d3c7fb2d435 00:18:20.623 12:12:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:20.623 12:12:33 -- common/autotest_common.sh@889 -- # local i 00:18:20.623 12:12:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:20.623 12:12:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:20.623 12:12:33 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:20.623 12:12:33 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9a06bb37-2919-463f-8dd9-7d3c7fb2d435 -t 2000 00:18:20.884 [ 00:18:20.884 { 00:18:20.884 "name": "9a06bb37-2919-463f-8dd9-7d3c7fb2d435", 00:18:20.884 "aliases": [ 00:18:20.884 "lvs/lvol" 00:18:20.884 ], 00:18:20.884 "product_name": "Logical Volume", 00:18:20.884 "block_size": 4096, 00:18:20.884 "num_blocks": 38912, 00:18:20.884 "uuid": "9a06bb37-2919-463f-8dd9-7d3c7fb2d435", 00:18:20.884 "assigned_rate_limits": { 00:18:20.884 "rw_ios_per_sec": 0, 00:18:20.884 "rw_mbytes_per_sec": 0, 00:18:20.884 "r_mbytes_per_sec": 0, 00:18:20.884 "w_mbytes_per_sec": 0 00:18:20.884 }, 00:18:20.884 "claimed": false, 00:18:20.884 "zoned": false, 00:18:20.884 "supported_io_types": { 00:18:20.884 "read": true, 00:18:20.884 "write": true, 00:18:20.884 "unmap": true, 00:18:20.884 "write_zeroes": true, 00:18:20.884 "flush": false, 00:18:20.884 "reset": true, 00:18:20.884 "compare": false, 00:18:20.884 "compare_and_write": false, 00:18:20.884 "abort": false, 00:18:20.884 "nvme_admin": false, 00:18:20.884 "nvme_io": false 00:18:20.884 }, 00:18:20.884 "driver_specific": { 00:18:20.884 "lvol": { 00:18:20.884 "lvol_store_uuid": "650f29d9-b7aa-49e6-90f9-44d71286824e", 00:18:20.884 "base_bdev": "aio_bdev", 00:18:20.884 "thin_provision": false, 00:18:20.884 "snapshot": false, 00:18:20.884 "clone": false, 00:18:20.884 "esnap_clone": false 00:18:20.884 } 00:18:20.884 } 00:18:20.884 } 00:18:20.884 ] 00:18:20.884 12:12:33 -- common/autotest_common.sh@895 -- # return 0 00:18:20.884 12:12:33 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 650f29d9-b7aa-49e6-90f9-44d71286824e 00:18:20.884 12:12:33 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:18:20.884 12:12:33 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:18:20.884 12:12:33 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 650f29d9-b7aa-49e6-90f9-44d71286824e 00:18:20.884 12:12:33 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:18:21.146 12:12:34 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:18:21.146 12:12:34 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:21.146 [2024-06-11 12:12:34.150652] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:21.146 12:12:34 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 650f29d9-b7aa-49e6-90f9-44d71286824e 00:18:21.146 12:12:34 -- common/autotest_common.sh@640 -- # local es=0 00:18:21.146 12:12:34 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 650f29d9-b7aa-49e6-90f9-44d71286824e 00:18:21.146 12:12:34 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:21.408 12:12:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:21.408 12:12:34 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:21.408 12:12:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:21.408 12:12:34 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:21.408 12:12:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:21.408 12:12:34 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:21.408 12:12:34 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:21.408 12:12:34 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 650f29d9-b7aa-49e6-90f9-44d71286824e 00:18:21.408 request: 00:18:21.408 { 00:18:21.408 "uuid": "650f29d9-b7aa-49e6-90f9-44d71286824e", 00:18:21.408 "method": "bdev_lvol_get_lvstores", 00:18:21.408 "req_id": 1 00:18:21.408 } 00:18:21.408 Got JSON-RPC error response 00:18:21.408 response: 00:18:21.408 { 00:18:21.408 "code": -19, 00:18:21.408 "message": "No such device" 00:18:21.408 } 00:18:21.408 12:12:34 -- common/autotest_common.sh@643 -- # es=1 00:18:21.408 12:12:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:21.408 12:12:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:21.408 12:12:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:21.408 12:12:34 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:21.669 aio_bdev 00:18:21.669 12:12:34 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 9a06bb37-2919-463f-8dd9-7d3c7fb2d435 00:18:21.669 12:12:34 -- 
common/autotest_common.sh@887 -- # local bdev_name=9a06bb37-2919-463f-8dd9-7d3c7fb2d435 00:18:21.669 12:12:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:21.669 12:12:34 -- common/autotest_common.sh@889 -- # local i 00:18:21.669 12:12:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:21.669 12:12:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:21.669 12:12:34 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:21.669 12:12:34 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9a06bb37-2919-463f-8dd9-7d3c7fb2d435 -t 2000 00:18:21.930 [ 00:18:21.930 { 00:18:21.930 "name": "9a06bb37-2919-463f-8dd9-7d3c7fb2d435", 00:18:21.930 "aliases": [ 00:18:21.930 "lvs/lvol" 00:18:21.930 ], 00:18:21.930 "product_name": "Logical Volume", 00:18:21.930 "block_size": 4096, 00:18:21.930 "num_blocks": 38912, 00:18:21.930 "uuid": "9a06bb37-2919-463f-8dd9-7d3c7fb2d435", 00:18:21.930 "assigned_rate_limits": { 00:18:21.930 "rw_ios_per_sec": 0, 00:18:21.930 "rw_mbytes_per_sec": 0, 00:18:21.930 "r_mbytes_per_sec": 0, 00:18:21.930 "w_mbytes_per_sec": 0 00:18:21.930 }, 00:18:21.930 "claimed": false, 00:18:21.930 "zoned": false, 00:18:21.930 "supported_io_types": { 00:18:21.930 "read": true, 00:18:21.930 "write": true, 00:18:21.930 "unmap": true, 00:18:21.930 "write_zeroes": true, 00:18:21.930 "flush": false, 00:18:21.930 "reset": true, 00:18:21.930 "compare": false, 00:18:21.930 "compare_and_write": false, 00:18:21.930 "abort": false, 00:18:21.930 "nvme_admin": false, 00:18:21.930 "nvme_io": false 00:18:21.930 }, 00:18:21.930 "driver_specific": { 00:18:21.930 "lvol": { 00:18:21.930 "lvol_store_uuid": "650f29d9-b7aa-49e6-90f9-44d71286824e", 00:18:21.930 "base_bdev": "aio_bdev", 00:18:21.930 "thin_provision": false, 00:18:21.930 "snapshot": false, 00:18:21.930 "clone": false, 00:18:21.930 "esnap_clone": false 00:18:21.930 } 00:18:21.930 } 00:18:21.930 } 00:18:21.930 ] 00:18:21.930 12:12:34 -- common/autotest_common.sh@895 -- # return 0 00:18:21.930 12:12:34 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 650f29d9-b7aa-49e6-90f9-44d71286824e 00:18:21.930 12:12:34 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:21.930 12:12:34 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:21.930 12:12:34 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 650f29d9-b7aa-49e6-90f9-44d71286824e 00:18:21.930 12:12:34 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:22.190 12:12:35 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:22.190 12:12:35 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9a06bb37-2919-463f-8dd9-7d3c7fb2d435 00:18:22.451 12:12:35 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 650f29d9-b7aa-49e6-90f9-44d71286824e 00:18:22.451 12:12:35 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:22.713 12:12:35 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:22.713 00:18:22.713 real 0m16.544s 00:18:22.713 user 
0m43.355s 00:18:22.713 sys 0m2.761s 00:18:22.713 12:12:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:22.713 12:12:35 -- common/autotest_common.sh@10 -- # set +x 00:18:22.713 ************************************ 00:18:22.713 END TEST lvs_grow_dirty 00:18:22.713 ************************************ 00:18:22.713 12:12:35 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:22.713 12:12:35 -- common/autotest_common.sh@796 -- # type=--id 00:18:22.713 12:12:35 -- common/autotest_common.sh@797 -- # id=0 00:18:22.713 12:12:35 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:18:22.713 12:12:35 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:22.713 12:12:35 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:18:22.713 12:12:35 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:18:22.713 12:12:35 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:18:22.713 12:12:35 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:22.713 nvmf_trace.0 00:18:22.713 12:12:35 -- common/autotest_common.sh@811 -- # return 0 00:18:22.713 12:12:35 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:22.713 12:12:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:22.713 12:12:35 -- nvmf/common.sh@116 -- # sync 00:18:22.713 12:12:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:22.713 12:12:35 -- nvmf/common.sh@119 -- # set +e 00:18:22.713 12:12:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:22.713 12:12:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:22.713 rmmod nvme_tcp 00:18:22.713 rmmod nvme_fabrics 00:18:22.713 rmmod nvme_keyring 00:18:22.713 12:12:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:22.713 12:12:35 -- nvmf/common.sh@123 -- # set -e 00:18:22.713 12:12:35 -- nvmf/common.sh@124 -- # return 0 00:18:22.713 12:12:35 -- nvmf/common.sh@477 -- # '[' -n 1464819 ']' 00:18:22.713 12:12:35 -- nvmf/common.sh@478 -- # killprocess 1464819 00:18:22.713 12:12:35 -- common/autotest_common.sh@926 -- # '[' -z 1464819 ']' 00:18:22.713 12:12:35 -- common/autotest_common.sh@930 -- # kill -0 1464819 00:18:22.974 12:12:35 -- common/autotest_common.sh@931 -- # uname 00:18:22.974 12:12:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:22.974 12:12:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1464819 00:18:22.974 12:12:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:22.974 12:12:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:22.974 12:12:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1464819' 00:18:22.974 killing process with pid 1464819 00:18:22.974 12:12:35 -- common/autotest_common.sh@945 -- # kill 1464819 00:18:22.974 12:12:35 -- common/autotest_common.sh@950 -- # wait 1464819 00:18:22.974 12:12:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:22.974 12:12:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:22.974 12:12:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:22.974 12:12:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:22.974 12:12:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:22.974 12:12:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.974 12:12:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.974 12:12:35 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:18:25.519 12:12:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:25.519 00:18:25.519 real 0m42.337s 00:18:25.519 user 1m3.824s 00:18:25.519 sys 0m9.750s 00:18:25.519 12:12:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:25.519 12:12:37 -- common/autotest_common.sh@10 -- # set +x 00:18:25.519 ************************************ 00:18:25.519 END TEST nvmf_lvs_grow 00:18:25.519 ************************************ 00:18:25.519 12:12:38 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:25.519 12:12:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:25.519 12:12:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:25.519 12:12:38 -- common/autotest_common.sh@10 -- # set +x 00:18:25.519 ************************************ 00:18:25.519 START TEST nvmf_bdev_io_wait 00:18:25.519 ************************************ 00:18:25.519 12:12:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:25.519 * Looking for test storage... 00:18:25.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:25.519 12:12:38 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:25.519 12:12:38 -- nvmf/common.sh@7 -- # uname -s 00:18:25.519 12:12:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.519 12:12:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.519 12:12:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.519 12:12:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.519 12:12:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.519 12:12:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.519 12:12:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.519 12:12:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.519 12:12:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.519 12:12:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.520 12:12:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:25.520 12:12:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:25.520 12:12:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.520 12:12:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.520 12:12:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:25.520 12:12:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:25.520 12:12:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.520 12:12:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.520 12:12:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.520 12:12:38 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.520 12:12:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.520 12:12:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.520 12:12:38 -- paths/export.sh@5 -- # export PATH 00:18:25.520 12:12:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.520 12:12:38 -- nvmf/common.sh@46 -- # : 0 00:18:25.520 12:12:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:25.520 12:12:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:25.520 12:12:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:25.520 12:12:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.520 12:12:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.520 12:12:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:25.520 12:12:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:25.520 12:12:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:25.520 12:12:38 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:25.520 12:12:38 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:25.520 12:12:38 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:25.520 12:12:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:25.520 12:12:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.520 12:12:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:25.520 12:12:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:25.520 12:12:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:25.520 12:12:38 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.520 12:12:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:25.520 12:12:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.520 12:12:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:25.520 12:12:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:25.520 12:12:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:25.520 12:12:38 -- common/autotest_common.sh@10 -- # set +x 00:18:32.104 12:12:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:32.104 12:12:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:32.104 12:12:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:32.104 12:12:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:32.104 12:12:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:32.104 12:12:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:32.104 12:12:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:32.104 12:12:45 -- nvmf/common.sh@294 -- # net_devs=() 00:18:32.104 12:12:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:32.104 12:12:45 -- nvmf/common.sh@295 -- # e810=() 00:18:32.104 12:12:45 -- nvmf/common.sh@295 -- # local -ga e810 00:18:32.104 12:12:45 -- nvmf/common.sh@296 -- # x722=() 00:18:32.104 12:12:45 -- nvmf/common.sh@296 -- # local -ga x722 00:18:32.104 12:12:45 -- nvmf/common.sh@297 -- # mlx=() 00:18:32.104 12:12:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:32.104 12:12:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:32.104 12:12:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:32.104 12:12:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:32.104 12:12:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:32.104 12:12:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:32.104 12:12:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:32.104 12:12:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:32.104 12:12:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:32.104 12:12:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:32.104 12:12:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:32.104 12:12:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:32.104 12:12:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:32.104 12:12:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:32.104 12:12:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:32.104 12:12:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:32.104 12:12:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:32.104 12:12:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:32.104 12:12:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:32.104 12:12:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:32.104 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:32.104 12:12:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:32.104 12:12:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:32.104 12:12:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.104 12:12:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.104 12:12:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:32.104 12:12:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:18:32.104 12:12:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:32.104 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:32.104 12:12:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:32.104 12:12:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:32.104 12:12:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.104 12:12:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.104 12:12:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:32.104 12:12:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:32.104 12:12:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:32.104 12:12:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:32.104 12:12:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:32.104 12:12:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.104 12:12:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:32.104 12:12:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.104 12:12:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:32.104 Found net devices under 0000:31:00.0: cvl_0_0 00:18:32.104 12:12:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.104 12:12:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:32.104 12:12:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.104 12:12:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:32.104 12:12:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.104 12:12:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:32.104 Found net devices under 0000:31:00.1: cvl_0_1 00:18:32.104 12:12:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.104 12:12:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:32.364 12:12:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:32.364 12:12:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:32.364 12:12:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:32.364 12:12:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:32.364 12:12:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:32.364 12:12:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:32.364 12:12:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:32.364 12:12:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:32.364 12:12:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:32.364 12:12:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:32.364 12:12:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:32.364 12:12:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:32.364 12:12:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:32.364 12:12:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:32.364 12:12:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:32.364 12:12:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:32.364 12:12:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:32.364 12:12:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:32.364 12:12:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:32.364 12:12:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:32.364 12:12:45 -- nvmf/common.sh@259 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:32.625 12:12:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:32.625 12:12:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:32.625 12:12:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:32.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:32.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:18:32.625 00:18:32.625 --- 10.0.0.2 ping statistics --- 00:18:32.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.625 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:18:32.625 12:12:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:32.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:32.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:18:32.625 00:18:32.625 --- 10.0.0.1 ping statistics --- 00:18:32.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.625 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:18:32.625 12:12:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:32.625 12:12:45 -- nvmf/common.sh@410 -- # return 0 00:18:32.625 12:12:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:32.625 12:12:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:32.625 12:12:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:32.625 12:12:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:32.625 12:12:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:32.625 12:12:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:32.625 12:12:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:32.625 12:12:45 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:32.625 12:12:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:32.625 12:12:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:32.625 12:12:45 -- common/autotest_common.sh@10 -- # set +x 00:18:32.625 12:12:45 -- nvmf/common.sh@469 -- # nvmfpid=1469641 00:18:32.625 12:12:45 -- nvmf/common.sh@470 -- # waitforlisten 1469641 00:18:32.625 12:12:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:32.625 12:12:45 -- common/autotest_common.sh@819 -- # '[' -z 1469641 ']' 00:18:32.625 12:12:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.625 12:12:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:32.625 12:12:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.625 12:12:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:32.625 12:12:45 -- common/autotest_common.sh@10 -- # set +x 00:18:32.625 [2024-06-11 12:12:45.527105] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
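Before the bdev_io_wait target starts, nvmftestinit/nvmf_tcp_init wires up the network as traced above: the two ice ports are split between the root namespace and a private cvl_0_0_ns_spdk namespace so one machine can act as both initiator (cvl_0_1, 10.0.0.1) and target (cvl_0_0, 10.0.0.2) over TCP port 4420, and a ping in each direction proves the path before nvme-tcp is loaded and nvmf_tgt is launched inside the namespace. The same plumbing as a plain command list (interface names and addresses are specific to this machine):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target namespace -> root namespace
  modprobe nvme-tcp                                       # host-side initiator driver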
00:18:32.625 [2024-06-11 12:12:45.527154] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.625 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.625 [2024-06-11 12:12:45.593310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:32.625 [2024-06-11 12:12:45.623697] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:32.625 [2024-06-11 12:12:45.623832] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.625 [2024-06-11 12:12:45.623843] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.625 [2024-06-11 12:12:45.623851] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:32.625 [2024-06-11 12:12:45.623992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.625 [2024-06-11 12:12:45.624094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.625 [2024-06-11 12:12:45.624368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:32.625 [2024-06-11 12:12:45.624369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.566 12:12:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:33.566 12:12:46 -- common/autotest_common.sh@852 -- # return 0 00:18:33.566 12:12:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:33.566 12:12:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:33.566 12:12:46 -- common/autotest_common.sh@10 -- # set +x 00:18:33.566 12:12:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.566 12:12:46 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:33.566 12:12:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.566 12:12:46 -- common/autotest_common.sh@10 -- # set +x 00:18:33.566 12:12:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:33.566 12:12:46 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:33.566 12:12:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.566 12:12:46 -- common/autotest_common.sh@10 -- # set +x 00:18:33.566 12:12:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:33.566 12:12:46 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:33.566 12:12:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.566 12:12:46 -- common/autotest_common.sh@10 -- # set +x 00:18:33.566 [2024-06-11 12:12:46.392565] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.566 12:12:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:33.566 12:12:46 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:33.566 12:12:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.566 12:12:46 -- common/autotest_common.sh@10 -- # set +x 00:18:33.566 Malloc0 00:18:33.566 12:12:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:33.566 12:12:46 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:33.566 12:12:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.566 12:12:46 -- common/autotest_common.sh@10 -- # set +x 00:18:33.566 12:12:46 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:33.566 12:12:46 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:33.566 12:12:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.566 12:12:46 -- common/autotest_common.sh@10 -- # set +x 00:18:33.566 12:12:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:33.566 12:12:46 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:33.566 12:12:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.566 12:12:46 -- common/autotest_common.sh@10 -- # set +x 00:18:33.566 [2024-06-11 12:12:46.458304] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.566 12:12:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:33.566 12:12:46 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1469996 00:18:33.566 12:12:46 -- target/bdev_io_wait.sh@30 -- # READ_PID=1469998 00:18:33.566 12:12:46 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:33.566 12:12:46 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:33.566 12:12:46 -- nvmf/common.sh@520 -- # config=() 00:18:33.566 12:12:46 -- nvmf/common.sh@520 -- # local subsystem config 00:18:33.566 12:12:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:33.566 12:12:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:33.566 { 00:18:33.566 "params": { 00:18:33.566 "name": "Nvme$subsystem", 00:18:33.566 "trtype": "$TEST_TRANSPORT", 00:18:33.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.566 "adrfam": "ipv4", 00:18:33.566 "trsvcid": "$NVMF_PORT", 00:18:33.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.566 "hdgst": ${hdgst:-false}, 00:18:33.566 "ddgst": ${ddgst:-false} 00:18:33.566 }, 00:18:33.566 "method": "bdev_nvme_attach_controller" 00:18:33.566 } 00:18:33.566 EOF 00:18:33.566 )") 00:18:33.566 12:12:46 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1470000 00:18:33.566 12:12:46 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:33.566 12:12:46 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:33.566 12:12:46 -- nvmf/common.sh@520 -- # config=() 00:18:33.566 12:12:46 -- nvmf/common.sh@520 -- # local subsystem config 00:18:33.566 12:12:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:33.566 12:12:46 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1470003 00:18:33.566 12:12:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:33.566 { 00:18:33.566 "params": { 00:18:33.566 "name": "Nvme$subsystem", 00:18:33.566 "trtype": "$TEST_TRANSPORT", 00:18:33.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.566 "adrfam": "ipv4", 00:18:33.566 "trsvcid": "$NVMF_PORT", 00:18:33.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.566 "hdgst": ${hdgst:-false}, 00:18:33.566 "ddgst": ${ddgst:-false} 00:18:33.566 }, 00:18:33.566 "method": "bdev_nvme_attach_controller" 00:18:33.566 } 00:18:33.566 EOF 00:18:33.566 )") 00:18:33.566 12:12:46 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 
-q 128 -o 4096 -w flush -t 1 -s 256 00:18:33.566 12:12:46 -- target/bdev_io_wait.sh@35 -- # sync 00:18:33.566 12:12:46 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:33.566 12:12:46 -- nvmf/common.sh@520 -- # config=() 00:18:33.566 12:12:46 -- nvmf/common.sh@542 -- # cat 00:18:33.566 12:12:46 -- nvmf/common.sh@520 -- # local subsystem config 00:18:33.566 12:12:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:33.566 12:12:46 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:33.566 12:12:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:33.566 { 00:18:33.566 "params": { 00:18:33.566 "name": "Nvme$subsystem", 00:18:33.566 "trtype": "$TEST_TRANSPORT", 00:18:33.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.566 "adrfam": "ipv4", 00:18:33.566 "trsvcid": "$NVMF_PORT", 00:18:33.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.566 "hdgst": ${hdgst:-false}, 00:18:33.566 "ddgst": ${ddgst:-false} 00:18:33.566 }, 00:18:33.566 "method": "bdev_nvme_attach_controller" 00:18:33.566 } 00:18:33.566 EOF 00:18:33.566 )") 00:18:33.566 12:12:46 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:33.566 12:12:46 -- nvmf/common.sh@520 -- # config=() 00:18:33.567 12:12:46 -- nvmf/common.sh@520 -- # local subsystem config 00:18:33.567 12:12:46 -- nvmf/common.sh@542 -- # cat 00:18:33.567 12:12:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:33.567 12:12:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:33.567 { 00:18:33.567 "params": { 00:18:33.567 "name": "Nvme$subsystem", 00:18:33.567 "trtype": "$TEST_TRANSPORT", 00:18:33.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.567 "adrfam": "ipv4", 00:18:33.567 "trsvcid": "$NVMF_PORT", 00:18:33.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.567 "hdgst": ${hdgst:-false}, 00:18:33.567 "ddgst": ${ddgst:-false} 00:18:33.567 }, 00:18:33.567 "method": "bdev_nvme_attach_controller" 00:18:33.567 } 00:18:33.567 EOF 00:18:33.567 )") 00:18:33.567 12:12:46 -- nvmf/common.sh@542 -- # cat 00:18:33.567 12:12:46 -- target/bdev_io_wait.sh@37 -- # wait 1469996 00:18:33.567 12:12:46 -- nvmf/common.sh@542 -- # cat 00:18:33.567 12:12:46 -- nvmf/common.sh@544 -- # jq . 00:18:33.567 12:12:46 -- nvmf/common.sh@544 -- # jq . 00:18:33.567 12:12:46 -- nvmf/common.sh@544 -- # jq . 00:18:33.567 12:12:46 -- nvmf/common.sh@545 -- # IFS=, 00:18:33.567 12:12:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:33.567 "params": { 00:18:33.567 "name": "Nvme1", 00:18:33.567 "trtype": "tcp", 00:18:33.567 "traddr": "10.0.0.2", 00:18:33.567 "adrfam": "ipv4", 00:18:33.567 "trsvcid": "4420", 00:18:33.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:33.567 "hdgst": false, 00:18:33.567 "ddgst": false 00:18:33.567 }, 00:18:33.567 "method": "bdev_nvme_attach_controller" 00:18:33.567 }' 00:18:33.567 12:12:46 -- nvmf/common.sh@544 -- # jq . 
00:18:33.567 12:12:46 -- nvmf/common.sh@545 -- # IFS=, 00:18:33.567 12:12:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:33.567 "params": { 00:18:33.567 "name": "Nvme1", 00:18:33.567 "trtype": "tcp", 00:18:33.567 "traddr": "10.0.0.2", 00:18:33.567 "adrfam": "ipv4", 00:18:33.567 "trsvcid": "4420", 00:18:33.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:33.567 "hdgst": false, 00:18:33.567 "ddgst": false 00:18:33.567 }, 00:18:33.567 "method": "bdev_nvme_attach_controller" 00:18:33.567 }' 00:18:33.567 12:12:46 -- nvmf/common.sh@545 -- # IFS=, 00:18:33.567 12:12:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:33.567 "params": { 00:18:33.567 "name": "Nvme1", 00:18:33.567 "trtype": "tcp", 00:18:33.567 "traddr": "10.0.0.2", 00:18:33.567 "adrfam": "ipv4", 00:18:33.567 "trsvcid": "4420", 00:18:33.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:33.567 "hdgst": false, 00:18:33.567 "ddgst": false 00:18:33.567 }, 00:18:33.567 "method": "bdev_nvme_attach_controller" 00:18:33.567 }' 00:18:33.567 12:12:46 -- nvmf/common.sh@545 -- # IFS=, 00:18:33.567 12:12:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:33.567 "params": { 00:18:33.567 "name": "Nvme1", 00:18:33.567 "trtype": "tcp", 00:18:33.567 "traddr": "10.0.0.2", 00:18:33.567 "adrfam": "ipv4", 00:18:33.567 "trsvcid": "4420", 00:18:33.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:33.567 "hdgst": false, 00:18:33.567 "ddgst": false 00:18:33.567 }, 00:18:33.567 "method": "bdev_nvme_attach_controller" 00:18:33.567 }' 00:18:33.567 [2024-06-11 12:12:46.509947] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:33.567 [2024-06-11 12:12:46.509999] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:33.567 [2024-06-11 12:12:46.511044] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:33.567 [2024-06-11 12:12:46.511046] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:33.567 [2024-06-11 12:12:46.511093] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-06-11 12:12:46.511093] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:33.567 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:33.567 [2024-06-11 12:12:46.511662] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
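The four bdevperf invocations traced above are the core of the bdev_io_wait test: one process per workload (write, read, flush, unmap), each pinned to its own core by -m and given its own instance id (-i 1..4, hence the spdk1..spdk4 file prefixes in the EAL lines), all driving the same Malloc0 namespace behind nqn.2016-06.io.spdk:cnode1. None of them talks to an RPC server; gen_nvmf_target_json prints the bdev_nvme_attach_controller parameters shown above and the config reaches bdevperf on /dev/fd/63 via process substitution. The target was started earlier with a very small bdev_io pool (bdev_set_options -p 5 -c 1), presumably to force the I/O-wait/retry path the test is named after. A sketch of how the four jobs are launched and reaped, reusing the WRITE/READ/FLUSH/UNMAP pid variables from the script:

  # each <(gen_nvmf_target_json) expands to /dev/fd/63 holding the attach-controller config
  bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
  bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
  bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
  bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
  wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID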
00:18:33.567 [2024-06-11 12:12:46.511706] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:33.567 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.827 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.827 [2024-06-11 12:12:46.657091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.827 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.827 [2024-06-11 12:12:46.673126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:33.827 [2024-06-11 12:12:46.716119] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.827 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.827 [2024-06-11 12:12:46.733435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:33.827 [2024-06-11 12:12:46.760936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.827 [2024-06-11 12:12:46.776799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:33.827 [2024-06-11 12:12:46.791152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.827 [2024-06-11 12:12:46.807390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:34.087 Running I/O for 1 seconds... 00:18:34.087 Running I/O for 1 seconds... 00:18:34.088 Running I/O for 1 seconds... 00:18:34.088 Running I/O for 1 seconds... 00:18:35.027 00:18:35.027 Latency(us) 00:18:35.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.027 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:35.027 Nvme1n1 : 1.00 190024.91 742.28 0.00 0.00 670.96 267.95 764.59 00:18:35.027 =================================================================================================================== 00:18:35.027 Total : 190024.91 742.28 0.00 0.00 670.96 267.95 764.59 00:18:35.027 00:18:35.027 Latency(us) 00:18:35.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.027 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:35.027 Nvme1n1 : 1.01 7915.90 30.92 0.00 0.00 16045.18 6853.97 25012.91 00:18:35.027 =================================================================================================================== 00:18:35.027 Total : 7915.90 30.92 0.00 0.00 16045.18 6853.97 25012.91 00:18:35.027 00:18:35.027 Latency(us) 00:18:35.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.027 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:35.027 Nvme1n1 : 1.00 7937.45 31.01 0.00 0.00 16088.66 4041.39 36700.16 00:18:35.027 =================================================================================================================== 00:18:35.027 Total : 7937.45 31.01 0.00 0.00 16088.66 4041.39 36700.16 00:18:35.027 00:18:35.027 Latency(us) 00:18:35.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.027 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:35.027 Nvme1n1 : 1.01 13454.81 52.56 0.00 0.00 9483.36 5079.04 21626.88 00:18:35.027 =================================================================================================================== 00:18:35.027 Total : 13454.81 52.56 0.00 0.00 9483.36 5079.04 21626.88 00:18:35.288 12:12:48 -- target/bdev_io_wait.sh@38 -- # wait 1469998 00:18:35.288 
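Reading the four result tables above: every job ran for one second at queue depth 128 with a 4096-byte I/O size, so the MiB/s column is simply IOPS multiplied by 4 KiB. The write job (core mask 0x10) sustains roughly 13.5k IOPS over the TCP transport and the read and unmap jobs roughly 7.9k, while the flush job completes about 190k operations per second, consistent with flush carrying no data payload. A quick cross-check of the write row:

# Throughput column sanity check: MiB/s = IOPS x 4096 bytes / 2^20.
awk 'BEGIN { printf "%.2f MiB/s\n", 13454.81 * 4096 / (1024 * 1024) }'   # prints 52.56 MiB/s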
12:12:48 -- target/bdev_io_wait.sh@39 -- # wait 1470000 00:18:35.288 12:12:48 -- target/bdev_io_wait.sh@40 -- # wait 1470003 00:18:35.288 12:12:48 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:35.288 12:12:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.288 12:12:48 -- common/autotest_common.sh@10 -- # set +x 00:18:35.288 12:12:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.288 12:12:48 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:35.288 12:12:48 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:35.288 12:12:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:35.288 12:12:48 -- nvmf/common.sh@116 -- # sync 00:18:35.288 12:12:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:35.288 12:12:48 -- nvmf/common.sh@119 -- # set +e 00:18:35.288 12:12:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:35.288 12:12:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:35.288 rmmod nvme_tcp 00:18:35.288 rmmod nvme_fabrics 00:18:35.288 rmmod nvme_keyring 00:18:35.288 12:12:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:35.288 12:12:48 -- nvmf/common.sh@123 -- # set -e 00:18:35.288 12:12:48 -- nvmf/common.sh@124 -- # return 0 00:18:35.288 12:12:48 -- nvmf/common.sh@477 -- # '[' -n 1469641 ']' 00:18:35.288 12:12:48 -- nvmf/common.sh@478 -- # killprocess 1469641 00:18:35.288 12:12:48 -- common/autotest_common.sh@926 -- # '[' -z 1469641 ']' 00:18:35.288 12:12:48 -- common/autotest_common.sh@930 -- # kill -0 1469641 00:18:35.288 12:12:48 -- common/autotest_common.sh@931 -- # uname 00:18:35.288 12:12:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:35.288 12:12:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1469641 00:18:35.552 12:12:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:35.552 12:12:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:35.552 12:12:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1469641' 00:18:35.552 killing process with pid 1469641 00:18:35.552 12:12:48 -- common/autotest_common.sh@945 -- # kill 1469641 00:18:35.552 12:12:48 -- common/autotest_common.sh@950 -- # wait 1469641 00:18:35.552 12:12:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:35.552 12:12:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:35.552 12:12:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:35.552 12:12:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:35.552 12:12:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:35.552 12:12:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.552 12:12:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:35.552 12:12:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.520 12:12:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:37.520 00:18:37.520 real 0m12.473s 00:18:37.520 user 0m18.502s 00:18:37.520 sys 0m6.657s 00:18:37.520 12:12:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:37.520 12:12:50 -- common/autotest_common.sh@10 -- # set +x 00:18:37.520 ************************************ 00:18:37.520 END TEST nvmf_bdev_io_wait 00:18:37.520 ************************************ 00:18:37.520 12:12:50 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:37.520 12:12:50 -- common/autotest_common.sh@1077 
-- # '[' 3 -le 1 ']' 00:18:37.520 12:12:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:37.520 12:12:50 -- common/autotest_common.sh@10 -- # set +x 00:18:37.781 ************************************ 00:18:37.781 START TEST nvmf_queue_depth 00:18:37.781 ************************************ 00:18:37.781 12:12:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:37.781 * Looking for test storage... 00:18:37.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:37.781 12:12:50 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:37.781 12:12:50 -- nvmf/common.sh@7 -- # uname -s 00:18:37.781 12:12:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:37.781 12:12:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.781 12:12:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.781 12:12:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.781 12:12:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.781 12:12:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.781 12:12:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.781 12:12:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.781 12:12:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.781 12:12:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.781 12:12:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:37.781 12:12:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:37.781 12:12:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.781 12:12:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.781 12:12:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:37.781 12:12:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:37.781 12:12:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.781 12:12:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.781 12:12:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.781 12:12:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.781 12:12:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.781 12:12:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.781 12:12:50 -- paths/export.sh@5 -- # export PATH 00:18:37.781 12:12:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.781 12:12:50 -- nvmf/common.sh@46 -- # : 0 00:18:37.781 12:12:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:37.781 12:12:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:37.781 12:12:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:37.781 12:12:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.781 12:12:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.781 12:12:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:37.781 12:12:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:37.781 12:12:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:37.781 12:12:50 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:37.781 12:12:50 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:37.781 12:12:50 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:37.781 12:12:50 -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:37.781 12:12:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:37.781 12:12:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.782 12:12:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:37.782 12:12:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:37.782 12:12:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:37.782 12:12:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.782 12:12:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:37.782 12:12:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.782 12:12:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:37.782 12:12:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:37.782 12:12:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:37.782 12:12:50 -- common/autotest_common.sh@10 -- # set +x 00:18:45.925 12:12:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:45.925 12:12:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:45.925 12:12:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:45.925 12:12:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:45.925 12:12:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:45.925 12:12:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:45.925 12:12:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:45.925 12:12:57 -- nvmf/common.sh@294 -- # net_devs=() 
00:18:45.925 12:12:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:45.925 12:12:57 -- nvmf/common.sh@295 -- # e810=() 00:18:45.925 12:12:57 -- nvmf/common.sh@295 -- # local -ga e810 00:18:45.925 12:12:57 -- nvmf/common.sh@296 -- # x722=() 00:18:45.925 12:12:57 -- nvmf/common.sh@296 -- # local -ga x722 00:18:45.925 12:12:57 -- nvmf/common.sh@297 -- # mlx=() 00:18:45.925 12:12:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:45.925 12:12:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:45.925 12:12:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:45.925 12:12:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:45.925 12:12:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:45.925 12:12:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:45.925 12:12:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:45.925 12:12:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:45.925 12:12:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:45.925 12:12:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:45.925 12:12:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:45.925 12:12:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:45.925 12:12:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:45.925 12:12:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:45.925 12:12:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:45.925 12:12:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:45.925 12:12:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:45.925 12:12:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:45.925 12:12:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:45.925 12:12:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:45.925 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:45.925 12:12:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:45.925 12:12:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:45.925 12:12:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.925 12:12:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.925 12:12:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:45.925 12:12:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:45.925 12:12:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:45.925 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:45.925 12:12:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:45.925 12:12:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:45.925 12:12:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.925 12:12:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.925 12:12:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:45.925 12:12:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:45.925 12:12:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:45.925 12:12:57 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:45.925 12:12:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:45.925 12:12:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.925 12:12:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:45.925 12:12:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:18:45.925 12:12:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:45.925 Found net devices under 0000:31:00.0: cvl_0_0 00:18:45.925 12:12:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.925 12:12:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:45.925 12:12:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.925 12:12:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:45.925 12:12:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.925 12:12:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:45.925 Found net devices under 0000:31:00.1: cvl_0_1 00:18:45.925 12:12:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.925 12:12:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:45.925 12:12:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:45.925 12:12:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:45.925 12:12:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:45.925 12:12:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:45.925 12:12:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.925 12:12:57 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:45.925 12:12:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:45.925 12:12:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:45.925 12:12:57 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:45.925 12:12:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:45.925 12:12:57 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:45.925 12:12:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:45.925 12:12:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.925 12:12:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:45.925 12:12:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:45.925 12:12:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:45.925 12:12:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:45.925 12:12:57 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:45.925 12:12:57 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:45.925 12:12:57 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:45.925 12:12:57 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:45.925 12:12:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:45.925 12:12:57 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:45.925 12:12:57 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:45.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:45.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:18:45.925 00:18:45.925 --- 10.0.0.2 ping statistics --- 00:18:45.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.925 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:18:45.925 12:12:57 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:45.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:45.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:18:45.925 00:18:45.925 --- 10.0.0.1 ping statistics --- 00:18:45.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.925 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:18:45.925 12:12:57 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.925 12:12:57 -- nvmf/common.sh@410 -- # return 0 00:18:45.925 12:12:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:45.925 12:12:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.925 12:12:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:45.925 12:12:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:45.925 12:12:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.925 12:12:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:45.925 12:12:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:45.925 12:12:57 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:45.925 12:12:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:45.925 12:12:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:45.925 12:12:57 -- common/autotest_common.sh@10 -- # set +x 00:18:45.925 12:12:57 -- nvmf/common.sh@469 -- # nvmfpid=1474487 00:18:45.925 12:12:57 -- nvmf/common.sh@470 -- # waitforlisten 1474487 00:18:45.925 12:12:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:45.925 12:12:57 -- common/autotest_common.sh@819 -- # '[' -z 1474487 ']' 00:18:45.925 12:12:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.925 12:12:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:45.925 12:12:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.925 12:12:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:45.925 12:12:57 -- common/autotest_common.sh@10 -- # set +x 00:18:45.925 [2024-06-11 12:12:57.982255] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:45.925 [2024-06-11 12:12:57.982318] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.925 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.925 [2024-06-11 12:12:58.058688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.925 [2024-06-11 12:12:58.103196] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:45.925 [2024-06-11 12:12:58.103345] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.925 [2024-06-11 12:12:58.103355] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.925 [2024-06-11 12:12:58.103362] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
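At this point nvmftestinit has finished wiring up the test network for the queue-depth test and nvmf_tgt is starting on core mask 0x2 inside the cvl_0_0_ns_spdk namespace. Condensed, and using the interface names this host enumerated (cvl_0_0 carries the target side, cvl_0_1 the initiator side), the setup traced above amounts to the following sequence:

# Condensed replay of the nvmftestinit steps traced above; interface names and
# addresses are the ones used in this run.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # root namespace -> target side
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace -> initiator side
# The target application is then launched inside the same namespace:
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2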
00:18:45.925 [2024-06-11 12:12:58.103399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.925 12:12:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:45.925 12:12:58 -- common/autotest_common.sh@852 -- # return 0 00:18:45.925 12:12:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:45.926 12:12:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:45.926 12:12:58 -- common/autotest_common.sh@10 -- # set +x 00:18:45.926 12:12:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.926 12:12:58 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:45.926 12:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:45.926 12:12:58 -- common/autotest_common.sh@10 -- # set +x 00:18:45.926 [2024-06-11 12:12:58.800743] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.926 12:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:45.926 12:12:58 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:45.926 12:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:45.926 12:12:58 -- common/autotest_common.sh@10 -- # set +x 00:18:45.926 Malloc0 00:18:45.926 12:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:45.926 12:12:58 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:45.926 12:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:45.926 12:12:58 -- common/autotest_common.sh@10 -- # set +x 00:18:45.926 12:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:45.926 12:12:58 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:45.926 12:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:45.926 12:12:58 -- common/autotest_common.sh@10 -- # set +x 00:18:45.926 12:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:45.926 12:12:58 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:45.926 12:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:45.926 12:12:58 -- common/autotest_common.sh@10 -- # set +x 00:18:45.926 [2024-06-11 12:12:58.867593] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.926 12:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:45.926 12:12:58 -- target/queue_depth.sh@30 -- # bdevperf_pid=1474795 00:18:45.926 12:12:58 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:45.926 12:12:58 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:45.926 12:12:58 -- target/queue_depth.sh@33 -- # waitforlisten 1474795 /var/tmp/bdevperf.sock 00:18:45.926 12:12:58 -- common/autotest_common.sh@819 -- # '[' -z 1474795 ']' 00:18:45.926 12:12:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:45.926 12:12:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:45.926 12:12:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:45.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:45.926 12:12:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:45.926 12:12:58 -- common/autotest_common.sh@10 -- # set +x 00:18:45.926 [2024-06-11 12:12:58.918339] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:45.926 [2024-06-11 12:12:58.918390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1474795 ] 00:18:45.926 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.187 [2024-06-11 12:12:58.980531] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.187 [2024-06-11 12:12:59.011510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.758 12:12:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:46.758 12:12:59 -- common/autotest_common.sh@852 -- # return 0 00:18:46.758 12:12:59 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:46.758 12:12:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:46.758 12:12:59 -- common/autotest_common.sh@10 -- # set +x 00:18:46.758 NVMe0n1 00:18:46.758 12:12:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:46.758 12:12:59 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:47.019 Running I/O for 10 seconds... 00:18:57.021 00:18:57.021 Latency(us) 00:18:57.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.021 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:57.021 Verification LBA range: start 0x0 length 0x4000 00:18:57.021 NVMe0n1 : 10.08 19488.35 76.13 0.00 0.00 52189.41 12997.97 53957.97 00:18:57.021 =================================================================================================================== 00:18:57.021 Total : 19488.35 76.13 0.00 0.00 52189.41 12997.97 53957.97 00:18:57.021 0 00:18:57.021 12:13:09 -- target/queue_depth.sh@39 -- # killprocess 1474795 00:18:57.021 12:13:09 -- common/autotest_common.sh@926 -- # '[' -z 1474795 ']' 00:18:57.021 12:13:09 -- common/autotest_common.sh@930 -- # kill -0 1474795 00:18:57.021 12:13:09 -- common/autotest_common.sh@931 -- # uname 00:18:57.021 12:13:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:57.021 12:13:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1474795 00:18:57.021 12:13:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:57.021 12:13:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:57.021 12:13:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1474795' 00:18:57.021 killing process with pid 1474795 00:18:57.021 12:13:09 -- common/autotest_common.sh@945 -- # kill 1474795 00:18:57.021 Received shutdown signal, test time was about 10.000000 seconds 00:18:57.021 00:18:57.021 Latency(us) 00:18:57.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.021 =================================================================================================================== 00:18:57.021 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:57.021 12:13:09 -- 
common/autotest_common.sh@950 -- # wait 1474795 00:18:57.282 12:13:10 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:57.282 12:13:10 -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:57.282 12:13:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:57.282 12:13:10 -- nvmf/common.sh@116 -- # sync 00:18:57.282 12:13:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:57.282 12:13:10 -- nvmf/common.sh@119 -- # set +e 00:18:57.282 12:13:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:57.282 12:13:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:57.282 rmmod nvme_tcp 00:18:57.282 rmmod nvme_fabrics 00:18:57.282 rmmod nvme_keyring 00:18:57.282 12:13:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:57.282 12:13:10 -- nvmf/common.sh@123 -- # set -e 00:18:57.282 12:13:10 -- nvmf/common.sh@124 -- # return 0 00:18:57.282 12:13:10 -- nvmf/common.sh@477 -- # '[' -n 1474487 ']' 00:18:57.282 12:13:10 -- nvmf/common.sh@478 -- # killprocess 1474487 00:18:57.282 12:13:10 -- common/autotest_common.sh@926 -- # '[' -z 1474487 ']' 00:18:57.282 12:13:10 -- common/autotest_common.sh@930 -- # kill -0 1474487 00:18:57.282 12:13:10 -- common/autotest_common.sh@931 -- # uname 00:18:57.282 12:13:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:57.282 12:13:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1474487 00:18:57.282 12:13:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:57.282 12:13:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:57.282 12:13:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1474487' 00:18:57.282 killing process with pid 1474487 00:18:57.282 12:13:10 -- common/autotest_common.sh@945 -- # kill 1474487 00:18:57.282 12:13:10 -- common/autotest_common.sh@950 -- # wait 1474487 00:18:57.542 12:13:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:57.542 12:13:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:57.542 12:13:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:57.542 12:13:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:57.542 12:13:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:57.542 12:13:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.542 12:13:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:57.542 12:13:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.454 12:13:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:59.454 00:18:59.454 real 0m21.883s 00:18:59.454 user 0m25.296s 00:18:59.455 sys 0m6.506s 00:18:59.455 12:13:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:59.455 12:13:12 -- common/autotest_common.sh@10 -- # set +x 00:18:59.455 ************************************ 00:18:59.455 END TEST nvmf_queue_depth 00:18:59.455 ************************************ 00:18:59.455 12:13:12 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:59.455 12:13:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:59.455 12:13:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:59.455 12:13:12 -- common/autotest_common.sh@10 -- # set +x 00:18:59.455 ************************************ 00:18:59.455 START TEST nvmf_multipath 00:18:59.455 ************************************ 00:18:59.455 12:13:12 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:59.716 * Looking for test storage... 00:18:59.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:59.716 12:13:12 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:59.716 12:13:12 -- nvmf/common.sh@7 -- # uname -s 00:18:59.716 12:13:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.716 12:13:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.716 12:13:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.716 12:13:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.716 12:13:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.716 12:13:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.716 12:13:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.716 12:13:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.716 12:13:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.716 12:13:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.716 12:13:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:59.716 12:13:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:59.716 12:13:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.716 12:13:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.716 12:13:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:59.716 12:13:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:59.716 12:13:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.716 12:13:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.716 12:13:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.716 12:13:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.716 12:13:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.716 12:13:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.716 12:13:12 -- paths/export.sh@5 -- # export PATH 00:18:59.716 12:13:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.716 12:13:12 -- nvmf/common.sh@46 -- # : 0 00:18:59.716 12:13:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:59.716 12:13:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:59.716 12:13:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:59.716 12:13:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.716 12:13:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.716 12:13:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:59.716 12:13:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:59.716 12:13:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:59.716 12:13:12 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:59.716 12:13:12 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:59.716 12:13:12 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:59.716 12:13:12 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:59.716 12:13:12 -- target/multipath.sh@43 -- # nvmftestinit 00:18:59.716 12:13:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:59.716 12:13:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:59.716 12:13:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:59.716 12:13:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:59.716 12:13:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:59.716 12:13:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.716 12:13:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.716 12:13:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.716 12:13:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:59.716 12:13:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:59.716 12:13:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:59.716 12:13:12 -- common/autotest_common.sh@10 -- # set +x 00:19:07.865 12:13:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:07.865 12:13:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:07.865 12:13:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:07.865 12:13:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:07.865 12:13:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:07.865 12:13:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:07.865 12:13:19 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:19:07.865 12:13:19 -- nvmf/common.sh@294 -- # net_devs=() 00:19:07.865 12:13:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:07.865 12:13:19 -- nvmf/common.sh@295 -- # e810=() 00:19:07.865 12:13:19 -- nvmf/common.sh@295 -- # local -ga e810 00:19:07.865 12:13:19 -- nvmf/common.sh@296 -- # x722=() 00:19:07.865 12:13:19 -- nvmf/common.sh@296 -- # local -ga x722 00:19:07.865 12:13:19 -- nvmf/common.sh@297 -- # mlx=() 00:19:07.865 12:13:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:07.865 12:13:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:07.865 12:13:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:07.865 12:13:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:07.865 12:13:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:07.865 12:13:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:07.865 12:13:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:07.865 12:13:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:07.865 12:13:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:07.865 12:13:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:07.865 12:13:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:07.865 12:13:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:07.865 12:13:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:07.865 12:13:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:07.865 12:13:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:07.865 12:13:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:07.865 12:13:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:07.865 12:13:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:07.865 12:13:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:07.865 12:13:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:07.865 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:07.865 12:13:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:07.865 12:13:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:07.865 12:13:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.865 12:13:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.865 12:13:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:07.865 12:13:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:07.865 12:13:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:07.865 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:07.865 12:13:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:07.865 12:13:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:07.865 12:13:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.865 12:13:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.865 12:13:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:07.865 12:13:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:07.865 12:13:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:07.865 12:13:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:07.865 12:13:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:07.865 12:13:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.865 12:13:19 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:19:07.865 12:13:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.865 12:13:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:07.865 Found net devices under 0000:31:00.0: cvl_0_0 00:19:07.865 12:13:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.865 12:13:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:07.865 12:13:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.865 12:13:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:07.865 12:13:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.865 12:13:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:07.865 Found net devices under 0000:31:00.1: cvl_0_1 00:19:07.865 12:13:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.865 12:13:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:07.865 12:13:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:07.865 12:13:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:07.865 12:13:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:07.865 12:13:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:07.865 12:13:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:07.865 12:13:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:07.865 12:13:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:07.865 12:13:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:07.865 12:13:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:07.865 12:13:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:07.865 12:13:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:07.865 12:13:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:07.865 12:13:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:07.865 12:13:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:07.865 12:13:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:07.865 12:13:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:07.865 12:13:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:07.865 12:13:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:07.865 12:13:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:07.865 12:13:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:07.865 12:13:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:07.865 12:13:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:07.865 12:13:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:07.865 12:13:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:07.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:07.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:19:07.865 00:19:07.865 --- 10.0.0.2 ping statistics --- 00:19:07.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.865 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:19:07.865 12:13:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:07.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:07.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.392 ms 00:19:07.865 00:19:07.865 --- 10.0.0.1 ping statistics --- 00:19:07.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.865 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:19:07.865 12:13:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.865 12:13:19 -- nvmf/common.sh@410 -- # return 0 00:19:07.865 12:13:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:07.865 12:13:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.865 12:13:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:07.865 12:13:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:07.865 12:13:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.865 12:13:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:07.865 12:13:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:07.865 12:13:19 -- target/multipath.sh@45 -- # '[' -z ']' 00:19:07.865 12:13:19 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:19:07.865 only one NIC for nvmf test 00:19:07.865 12:13:19 -- target/multipath.sh@47 -- # nvmftestfini 00:19:07.865 12:13:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:07.865 12:13:19 -- nvmf/common.sh@116 -- # sync 00:19:07.865 12:13:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:07.865 12:13:19 -- nvmf/common.sh@119 -- # set +e 00:19:07.865 12:13:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:07.865 12:13:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:07.865 rmmod nvme_tcp 00:19:07.865 rmmod nvme_fabrics 00:19:07.865 rmmod nvme_keyring 00:19:07.865 12:13:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:07.865 12:13:19 -- nvmf/common.sh@123 -- # set -e 00:19:07.865 12:13:19 -- nvmf/common.sh@124 -- # return 0 00:19:07.865 12:13:19 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:07.865 12:13:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:07.865 12:13:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:07.865 12:13:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:07.865 12:13:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:07.865 12:13:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:07.865 12:13:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.865 12:13:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.865 12:13:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.251 12:13:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:09.251 12:13:21 -- target/multipath.sh@48 -- # exit 0 00:19:09.251 12:13:21 -- target/multipath.sh@1 -- # nvmftestfini 00:19:09.251 12:13:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:09.251 12:13:21 -- nvmf/common.sh@116 -- # sync 00:19:09.251 12:13:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:09.251 12:13:21 -- nvmf/common.sh@119 -- # set +e 00:19:09.251 12:13:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:09.251 12:13:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:09.251 12:13:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:09.251 12:13:21 -- nvmf/common.sh@123 -- # set -e 00:19:09.251 12:13:21 -- nvmf/common.sh@124 -- # return 0 00:19:09.251 12:13:21 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:09.251 12:13:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:09.251 12:13:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:09.251 12:13:21 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:19:09.251 12:13:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:09.251 12:13:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:09.251 12:13:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.251 12:13:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:09.251 12:13:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.251 12:13:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:09.251 00:19:09.251 real 0m9.503s 00:19:09.251 user 0m2.091s 00:19:09.251 sys 0m5.312s 00:19:09.251 12:13:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:09.251 12:13:21 -- common/autotest_common.sh@10 -- # set +x 00:19:09.251 ************************************ 00:19:09.251 END TEST nvmf_multipath 00:19:09.251 ************************************ 00:19:09.251 12:13:22 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:09.251 12:13:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:09.251 12:13:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:09.251 12:13:22 -- common/autotest_common.sh@10 -- # set +x 00:19:09.251 ************************************ 00:19:09.251 START TEST nvmf_zcopy 00:19:09.251 ************************************ 00:19:09.251 12:13:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:09.251 * Looking for test storage... 00:19:09.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:09.251 12:13:22 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:09.251 12:13:22 -- nvmf/common.sh@7 -- # uname -s 00:19:09.251 12:13:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:09.251 12:13:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:09.251 12:13:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:09.251 12:13:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:09.251 12:13:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:09.251 12:13:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:09.251 12:13:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:09.251 12:13:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:09.251 12:13:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:09.251 12:13:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:09.252 12:13:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:09.252 12:13:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:09.252 12:13:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:09.252 12:13:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:09.252 12:13:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:09.252 12:13:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:09.252 12:13:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:09.252 12:13:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:09.252 12:13:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:09.252 12:13:22 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.252 12:13:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.252 12:13:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.252 12:13:22 -- paths/export.sh@5 -- # export PATH 00:19:09.252 12:13:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.252 12:13:22 -- nvmf/common.sh@46 -- # : 0 00:19:09.252 12:13:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:09.252 12:13:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:09.252 12:13:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:09.252 12:13:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:09.252 12:13:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:09.252 12:13:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:09.252 12:13:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:09.252 12:13:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:09.252 12:13:22 -- target/zcopy.sh@12 -- # nvmftestinit 00:19:09.252 12:13:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:09.252 12:13:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:09.252 12:13:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:09.252 12:13:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:09.252 12:13:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:09.252 12:13:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.252 12:13:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:09.252 12:13:22 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.252 12:13:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:09.252 12:13:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:09.252 12:13:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:09.252 12:13:22 -- common/autotest_common.sh@10 -- # set +x 00:19:17.389 12:13:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:17.389 12:13:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:17.389 12:13:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:17.389 12:13:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:17.389 12:13:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:17.389 12:13:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:17.389 12:13:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:17.389 12:13:29 -- nvmf/common.sh@294 -- # net_devs=() 00:19:17.389 12:13:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:17.389 12:13:29 -- nvmf/common.sh@295 -- # e810=() 00:19:17.389 12:13:29 -- nvmf/common.sh@295 -- # local -ga e810 00:19:17.389 12:13:29 -- nvmf/common.sh@296 -- # x722=() 00:19:17.389 12:13:29 -- nvmf/common.sh@296 -- # local -ga x722 00:19:17.389 12:13:29 -- nvmf/common.sh@297 -- # mlx=() 00:19:17.389 12:13:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:17.389 12:13:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:17.389 12:13:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:17.389 12:13:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:17.389 12:13:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:17.389 12:13:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:17.389 12:13:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:17.389 12:13:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:17.389 12:13:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:17.389 12:13:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:17.389 12:13:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:17.389 12:13:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:17.389 12:13:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:17.389 12:13:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:17.389 12:13:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:17.389 12:13:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:17.389 12:13:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:17.389 12:13:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:17.389 12:13:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:17.389 12:13:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:17.389 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:17.389 12:13:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:17.389 12:13:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:17.389 12:13:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.389 12:13:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.389 12:13:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:17.389 12:13:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:17.389 12:13:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:17.389 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:17.389 
12:13:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:17.389 12:13:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:17.389 12:13:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.389 12:13:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.389 12:13:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:17.389 12:13:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:17.389 12:13:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:17.389 12:13:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:17.389 12:13:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:17.389 12:13:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.389 12:13:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:17.389 12:13:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.389 12:13:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:17.389 Found net devices under 0000:31:00.0: cvl_0_0 00:19:17.389 12:13:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.389 12:13:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:17.389 12:13:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.389 12:13:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:17.389 12:13:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.389 12:13:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:17.389 Found net devices under 0000:31:00.1: cvl_0_1 00:19:17.389 12:13:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.389 12:13:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:17.389 12:13:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:17.389 12:13:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:17.389 12:13:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:17.389 12:13:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:17.389 12:13:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:17.389 12:13:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:17.389 12:13:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:17.389 12:13:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:17.390 12:13:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:17.390 12:13:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:17.390 12:13:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:17.390 12:13:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:17.390 12:13:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:17.390 12:13:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:17.390 12:13:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:17.390 12:13:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:17.390 12:13:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:17.390 12:13:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:17.390 12:13:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:17.390 12:13:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:17.390 12:13:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:17.390 12:13:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:17.390 12:13:29 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:17.390 12:13:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:17.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:17.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:19:17.390 00:19:17.390 --- 10.0.0.2 ping statistics --- 00:19:17.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.390 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:19:17.390 12:13:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:17.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:17.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:19:17.390 00:19:17.390 --- 10.0.0.1 ping statistics --- 00:19:17.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.390 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:19:17.390 12:13:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:17.390 12:13:29 -- nvmf/common.sh@410 -- # return 0 00:19:17.390 12:13:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:17.390 12:13:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:17.390 12:13:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:17.390 12:13:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:17.390 12:13:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:17.390 12:13:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:17.390 12:13:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:17.390 12:13:29 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:17.390 12:13:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:17.390 12:13:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:17.390 12:13:29 -- common/autotest_common.sh@10 -- # set +x 00:19:17.390 12:13:29 -- nvmf/common.sh@469 -- # nvmfpid=1485324 00:19:17.390 12:13:29 -- nvmf/common.sh@470 -- # waitforlisten 1485324 00:19:17.390 12:13:29 -- common/autotest_common.sh@819 -- # '[' -z 1485324 ']' 00:19:17.390 12:13:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.390 12:13:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:17.390 12:13:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.390 12:13:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:17.390 12:13:29 -- common/autotest_common.sh@10 -- # set +x 00:19:17.390 12:13:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:17.390 [2024-06-11 12:13:29.503923] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
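For reference, a condensed sketch of the network-namespace topology the trace above builds before starting the target: cvl_0_1 stays in the default namespace as the initiator-side interface (10.0.0.1), cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target-side interface (10.0.0.2), TCP port 4420 is opened on the initiator side, and reachability is checked with ping in both directions. Interface names, addresses and the port are taken from the log; the rest is an illustrative recap, not the exact test code.

# Sketch: rebuild the phy TCP test topology by hand (assumes two ports of the
# same NIC are exposed as cvl_0_0 and cvl_0_1, as in the trace above).
ip netns add cvl_0_0_ns_spdk                                  # target gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address (default netns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                            # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator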
00:19:17.390 [2024-06-11 12:13:29.503985] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.390 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.390 [2024-06-11 12:13:29.592326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.390 [2024-06-11 12:13:29.636849] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:17.390 [2024-06-11 12:13:29.637005] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.390 [2024-06-11 12:13:29.637015] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:17.390 [2024-06-11 12:13:29.637034] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:17.390 [2024-06-11 12:13:29.637064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.390 12:13:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:17.390 12:13:30 -- common/autotest_common.sh@852 -- # return 0 00:19:17.390 12:13:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:17.390 12:13:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:17.390 12:13:30 -- common/autotest_common.sh@10 -- # set +x 00:19:17.390 12:13:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.390 12:13:30 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:19:17.390 12:13:30 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:19:17.390 12:13:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:17.390 12:13:30 -- common/autotest_common.sh@10 -- # set +x 00:19:17.390 [2024-06-11 12:13:30.321920] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:17.390 12:13:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:17.390 12:13:30 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:17.390 12:13:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:17.390 12:13:30 -- common/autotest_common.sh@10 -- # set +x 00:19:17.390 12:13:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:17.390 12:13:30 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:17.390 12:13:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:17.390 12:13:30 -- common/autotest_common.sh@10 -- # set +x 00:19:17.390 [2024-06-11 12:13:30.338115] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:17.390 12:13:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:17.390 12:13:30 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:17.390 12:13:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:17.390 12:13:30 -- common/autotest_common.sh@10 -- # set +x 00:19:17.390 12:13:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:17.390 12:13:30 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:19:17.390 12:13:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:17.390 12:13:30 -- common/autotest_common.sh@10 -- # set +x 00:19:17.390 malloc0 00:19:17.390 12:13:30 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:19:17.390 12:13:30 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:17.390 12:13:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:17.390 12:13:30 -- common/autotest_common.sh@10 -- # set +x 00:19:17.390 12:13:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:17.390 12:13:30 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:19:17.390 12:13:30 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:19:17.390 12:13:30 -- nvmf/common.sh@520 -- # config=() 00:19:17.390 12:13:30 -- nvmf/common.sh@520 -- # local subsystem config 00:19:17.390 12:13:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:17.390 12:13:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:17.390 { 00:19:17.390 "params": { 00:19:17.390 "name": "Nvme$subsystem", 00:19:17.390 "trtype": "$TEST_TRANSPORT", 00:19:17.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:17.390 "adrfam": "ipv4", 00:19:17.390 "trsvcid": "$NVMF_PORT", 00:19:17.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:17.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:17.390 "hdgst": ${hdgst:-false}, 00:19:17.390 "ddgst": ${ddgst:-false} 00:19:17.390 }, 00:19:17.390 "method": "bdev_nvme_attach_controller" 00:19:17.390 } 00:19:17.390 EOF 00:19:17.390 )") 00:19:17.390 12:13:30 -- nvmf/common.sh@542 -- # cat 00:19:17.390 12:13:30 -- nvmf/common.sh@544 -- # jq . 00:19:17.390 12:13:30 -- nvmf/common.sh@545 -- # IFS=, 00:19:17.390 12:13:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:17.390 "params": { 00:19:17.390 "name": "Nvme1", 00:19:17.390 "trtype": "tcp", 00:19:17.390 "traddr": "10.0.0.2", 00:19:17.390 "adrfam": "ipv4", 00:19:17.390 "trsvcid": "4420", 00:19:17.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:17.390 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:17.390 "hdgst": false, 00:19:17.390 "ddgst": false 00:19:17.390 }, 00:19:17.390 "method": "bdev_nvme_attach_controller" 00:19:17.390 }' 00:19:17.390 [2024-06-11 12:13:30.422115] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:17.390 [2024-06-11 12:13:30.422178] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485660 ] 00:19:17.652 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.652 [2024-06-11 12:13:30.487078] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.652 [2024-06-11 12:13:30.524240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.913 Running I/O for 10 seconds... 
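The rpc_cmd calls traced above (zcopy.sh lines 22-30) bring up the zero-copy TCP target. Below is a minimal sketch of the same bring-up using scripts/rpc.py directly, assuming the target was started with the default /var/tmp/spdk.sock RPC socket; the flags are copied from the trace rather than re-derived.

# Sketch: target bring-up for the zcopy test, mirroring the traced rpc_cmd calls.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy             # TCP transport with zero-copy enabled
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 4096 -b malloc0                    # 32 MB malloc bdev, 4096-byte blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1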
00:19:27.916 00:19:27.916 Latency(us) 00:19:27.916 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.916 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:19:27.916 Verification LBA range: start 0x0 length 0x1000 00:19:27.916 Nvme1n1 : 10.01 11140.61 87.04 0.00 0.00 11460.66 894.29 21517.65 00:19:27.916 =================================================================================================================== 00:19:27.916 Total : 11140.61 87.04 0.00 0.00 11460.66 894.29 21517.65 00:19:27.916 12:13:40 -- target/zcopy.sh@39 -- # perfpid=1487689 00:19:27.916 12:13:40 -- target/zcopy.sh@41 -- # xtrace_disable 00:19:27.916 12:13:40 -- common/autotest_common.sh@10 -- # set +x 00:19:27.916 12:13:40 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:19:27.916 12:13:40 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:19:27.916 12:13:40 -- nvmf/common.sh@520 -- # config=() 00:19:27.916 12:13:40 -- nvmf/common.sh@520 -- # local subsystem config 00:19:27.916 12:13:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:27.916 12:13:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:27.916 { 00:19:27.916 "params": { 00:19:27.916 "name": "Nvme$subsystem", 00:19:27.916 "trtype": "$TEST_TRANSPORT", 00:19:27.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.916 "adrfam": "ipv4", 00:19:27.916 "trsvcid": "$NVMF_PORT", 00:19:27.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.916 "hdgst": ${hdgst:-false}, 00:19:27.916 "ddgst": ${ddgst:-false} 00:19:27.916 }, 00:19:27.916 "method": "bdev_nvme_attach_controller" 00:19:27.916 } 00:19:27.916 EOF 00:19:27.916 )") 00:19:27.916 12:13:40 -- nvmf/common.sh@542 -- # cat 00:19:27.916 [2024-06-11 12:13:40.853986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.917 [2024-06-11 12:13:40.854012] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.917 12:13:40 -- nvmf/common.sh@544 -- # jq . 
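The first bdevperf pass above (10 s, -w verify, queue depth 128, 8192-byte I/O) finished at roughly 11140 IOPS / 87 MiB/s with an average latency of about 11.5 ms, and a second 5-second randrw 50/50 pass is then launched as $perfpid. A hedged sketch of that initiator-side invocation follows, with the bdev_nvme_attach_controller parameters taken from the generated JSON above; the "subsystems"/"bdev"/"config" wrapper and the /tmp/nvme1.json path are assumptions for illustration, not copied from the test.

# Sketch: stand-alone bdevperf run against the target (JSON wrapper layout assumed).
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/nvme1.json -t 5 -q 128 -w randrw -M 50 -o 8192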
00:19:27.917 [2024-06-11 12:13:40.861977] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.917 [2024-06-11 12:13:40.861986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.917 12:13:40 -- nvmf/common.sh@545 -- # IFS=, 00:19:27.917 12:13:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:27.917 "params": { 00:19:27.917 "name": "Nvme1", 00:19:27.917 "trtype": "tcp", 00:19:27.917 "traddr": "10.0.0.2", 00:19:27.917 "adrfam": "ipv4", 00:19:27.917 "trsvcid": "4420", 00:19:27.917 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.917 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:27.917 "hdgst": false, 00:19:27.917 "ddgst": false 00:19:27.917 }, 00:19:27.917 "method": "bdev_nvme_attach_controller" 00:19:27.917 }' 00:19:27.917 [2024-06-11 12:13:40.869995] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.917 [2024-06-11 12:13:40.870003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.917 [2024-06-11 12:13:40.878015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.917 [2024-06-11 12:13:40.878026] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.917 [2024-06-11 12:13:40.886040] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.917 [2024-06-11 12:13:40.886047] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.917 [2024-06-11 12:13:40.893886] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:27.917 [2024-06-11 12:13:40.893935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487689 ] 00:19:27.917 [2024-06-11 12:13:40.894062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.917 [2024-06-11 12:13:40.894069] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.917 [2024-06-11 12:13:40.902077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.917 [2024-06-11 12:13:40.902084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.917 [2024-06-11 12:13:40.910098] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.917 [2024-06-11 12:13:40.910105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.917 [2024-06-11 12:13:40.918119] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.917 [2024-06-11 12:13:40.918126] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.917 EAL: No free 2048 kB hugepages reported on node 1 00:19:27.917 [2024-06-11 12:13:40.926139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.917 [2024-06-11 12:13:40.926146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.917 [2024-06-11 12:13:40.934159] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.917 [2024-06-11 12:13:40.934165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.917 [2024-06-11 12:13:40.942179] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:19:27.917 [2024-06-11 12:13:40.942186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.917 [2024-06-11 12:13:40.950199] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.917 [2024-06-11 12:13:40.950206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:40.952880] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.178 [2024-06-11 12:13:40.958221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:40.958229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:40.966242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:40.966251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:40.974263] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:40.974275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:40.981180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.178 [2024-06-11 12:13:40.982282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:40.982290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:40.990303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:40.990310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:40.998331] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:40.998344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:41.006346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:41.006355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:41.014365] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:41.014374] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:41.022383] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:41.022390] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:41.030405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:41.030414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:41.038424] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:41.038432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:41.046450] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:41.046461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:41.054470] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:41.054481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:41.062490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:41.062499] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:41.070512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:41.070521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:41.078530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:41.078539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:41.086552] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:41.086562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:41.094571] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:41.094580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:41.102591] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:41.102599] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:41.110619] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:41.110634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 Running I/O for 5 seconds... 
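The repeating pair of target-side messages here ("Requested NSID 1 already in use" from spdk_nvmf_subsystem_add_ns_ext, then "Unable to add namespace" from the RPC layer) records nvmf_subsystem_add_ns calls for NSID 1 being rejected while that namespace is still attached to cnode1; they are apparently issued repeatedly by zcopy.sh while the second bdevperf job runs. The same error pair is easy to reproduce in isolation, e.g. (illustrative only):

# Sketch: provoking the same rejection by hand (NSID 1 already attached).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # succeeds the first time
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # fails: NSID 1 already in use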
00:19:28.178 [2024-06-11 12:13:41.118631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:41.118639] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:41.128857] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:41.128873] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:41.137535] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:41.137551] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:41.145714] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:41.145728] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:41.154753] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:41.154768] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:41.163546] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:41.163561] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:41.171906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:41.171921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:41.180554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:41.180568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:41.189270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:41.189284] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:41.197752] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:41.197767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.178 [2024-06-11 12:13:41.206740] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.178 [2024-06-11 12:13:41.206755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.440 [2024-06-11 12:13:41.214845] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.440 [2024-06-11 12:13:41.214864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.440 [2024-06-11 12:13:41.223802] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.440 [2024-06-11 12:13:41.223816] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.440 [2024-06-11 12:13:41.231867] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.440 [2024-06-11 12:13:41.231883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.440 [2024-06-11 12:13:41.240701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.440 
[2024-06-11 12:13:41.240716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.440 [2024-06-11 12:13:41.249535] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.440 [2024-06-11 12:13:41.249549] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.440 [2024-06-11 12:13:41.258274] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.440 [2024-06-11 12:13:41.258289] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.440 [2024-06-11 12:13:41.266958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.440 [2024-06-11 12:13:41.266972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.440 [2024-06-11 12:13:41.275232] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.440 [2024-06-11 12:13:41.275246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.440 [2024-06-11 12:13:41.283797] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.440 [2024-06-11 12:13:41.283811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.440 [2024-06-11 12:13:41.292075] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.440 [2024-06-11 12:13:41.292089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.440 [2024-06-11 12:13:41.301097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.440 [2024-06-11 12:13:41.301112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.440 [2024-06-11 12:13:41.309644] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.440 [2024-06-11 12:13:41.309658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.440 [2024-06-11 12:13:41.318629] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.440 [2024-06-11 12:13:41.318644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.440 [2024-06-11 12:13:41.327299] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.440 [2024-06-11 12:13:41.327313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.440 [2024-06-11 12:13:41.335702] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.440 [2024-06-11 12:13:41.335716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.440 [2024-06-11 12:13:41.344265] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.440 [2024-06-11 12:13:41.344279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.440 [2024-06-11 12:13:41.352524] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.440 [2024-06-11 12:13:41.352537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.440 [2024-06-11 12:13:41.361064] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.440 [2024-06-11 12:13:41.361078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.440 [2024-06-11 12:13:41.369819] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.440 [2024-06-11 12:13:41.369834] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.441 [2024-06-11 12:13:41.378899] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.441 [2024-06-11 12:13:41.378918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.441 [2024-06-11 12:13:41.387622] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.441 [2024-06-11 12:13:41.387637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.441 [2024-06-11 12:13:41.396219] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.441 [2024-06-11 12:13:41.396234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.441 [2024-06-11 12:13:41.404664] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.441 [2024-06-11 12:13:41.404678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.441 [2024-06-11 12:13:41.413168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.441 [2024-06-11 12:13:41.413182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.441 [2024-06-11 12:13:41.421627] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.441 [2024-06-11 12:13:41.421642] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.441 [2024-06-11 12:13:41.429767] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.441 [2024-06-11 12:13:41.429782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.441 [2024-06-11 12:13:41.438683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.441 [2024-06-11 12:13:41.438698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.441 [2024-06-11 12:13:41.447160] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.441 [2024-06-11 12:13:41.447174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.441 [2024-06-11 12:13:41.455777] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.441 [2024-06-11 12:13:41.455791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.441 [2024-06-11 12:13:41.464524] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.441 [2024-06-11 12:13:41.464538] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.441 [2024-06-11 12:13:41.473487] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.441 [2024-06-11 12:13:41.473501] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.481512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.481526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.490507] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.490521] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.499181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.499195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.508457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.508471] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.516958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.516973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.525909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.525924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.534652] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.534666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.543600] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.543618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.551752] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.551766] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.560650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.560664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.569222] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.569238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.577339] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.577354] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.586288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.586303] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.595053] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.595068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.603785] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.603799] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.612459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.612474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.621238] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.621253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.629991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.630005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.638647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.638661] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.647219] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.647234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.656203] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.656218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.664471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.664486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.673257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.673272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.682102] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.682117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.690954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.690968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.699196] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.699211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.708278] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.708293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.716892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.716907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.725598] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.725613] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.702 [2024-06-11 12:13:41.734416] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.702 [2024-06-11 12:13:41.734430] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.964 [2024-06-11 12:13:41.743527] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.964 [2024-06-11 12:13:41.743542] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.964 [2024-06-11 12:13:41.751843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.964 [2024-06-11 12:13:41.751858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.964 [2024-06-11 12:13:41.760818] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.964 [2024-06-11 12:13:41.760833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.964 [2024-06-11 12:13:41.769661] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.964 [2024-06-11 12:13:41.769676] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.964 [2024-06-11 12:13:41.778375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.964 [2024-06-11 12:13:41.778389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.964 [2024-06-11 12:13:41.787214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.964 [2024-06-11 12:13:41.787229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.964 [2024-06-11 12:13:41.796262] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.964 [2024-06-11 12:13:41.796276] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.964 [2024-06-11 12:13:41.805200] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.964 [2024-06-11 12:13:41.805215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.964 [2024-06-11 12:13:41.813808] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.964 [2024-06-11 12:13:41.813823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.964 [2024-06-11 12:13:41.822537] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.964 [2024-06-11 12:13:41.822551] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.964 [2024-06-11 12:13:41.831676] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.964 [2024-06-11 12:13:41.831692] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.965 [2024-06-11 12:13:41.840078] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.965 [2024-06-11 12:13:41.840093] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.965 [2024-06-11 12:13:41.848810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.965 [2024-06-11 12:13:41.848824] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.965 [2024-06-11 12:13:41.857285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.965 [2024-06-11 12:13:41.857300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.965 [2024-06-11 12:13:41.865937] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.965 [2024-06-11 12:13:41.865952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.965 [2024-06-11 12:13:41.874289] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.965 [2024-06-11 12:13:41.874304] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.965 [2024-06-11 12:13:41.882708] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.965 [2024-06-11 12:13:41.882723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.965 [2024-06-11 12:13:41.891239] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.965 [2024-06-11 12:13:41.891253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.965 [2024-06-11 12:13:41.899843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.965 [2024-06-11 12:13:41.899858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.965 [2024-06-11 12:13:41.908390] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.965 [2024-06-11 12:13:41.908405] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.965 [2024-06-11 12:13:41.917736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.965 [2024-06-11 12:13:41.917751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.965 [2024-06-11 12:13:41.925638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.965 [2024-06-11 12:13:41.925654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.965 [2024-06-11 12:13:41.934906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.965 [2024-06-11 12:13:41.934921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.965 [2024-06-11 12:13:41.943603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.965 [2024-06-11 12:13:41.943618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.965 [2024-06-11 12:13:41.952226] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.965 [2024-06-11 12:13:41.952240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.965 [2024-06-11 12:13:41.961251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.965 [2024-06-11 12:13:41.961265] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.965 [2024-06-11 12:13:41.970011] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.965 [2024-06-11 12:13:41.970030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.965 [2024-06-11 12:13:41.978636] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.965 [2024-06-11 12:13:41.978650] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.965 [2024-06-11 12:13:41.986707] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.965 [2024-06-11 12:13:41.986721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.965 [2024-06-11 12:13:41.995382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.965 [2024-06-11 12:13:41.995396] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:29.225 [2024-06-11 12:13:42.004025] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:29.225 [2024-06-11 12:13:42.004039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of messages repeats for every subsequent namespace-add attempt, differing only in timestamps, from 2024-06-11 12:13:42.013 through 12:13:44.622 (elapsed 00:19:29.225 - 00:19:31.646) ...]
00:19:31.646 [2024-06-11 12:13:44.631278]
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.646 [2024-06-11 12:13:44.631293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.646 [2024-06-11 12:13:44.639148] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.646 [2024-06-11 12:13:44.639163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.646 [2024-06-11 12:13:44.648025] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.646 [2024-06-11 12:13:44.648040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.646 [2024-06-11 12:13:44.656348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.646 [2024-06-11 12:13:44.656363] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.646 [2024-06-11 12:13:44.665042] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.646 [2024-06-11 12:13:44.665057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.646 [2024-06-11 12:13:44.673414] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.646 [2024-06-11 12:13:44.673429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.682166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.682181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.690431] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.690446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.698944] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.698959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.707472] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.707487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.716208] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.716222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.724912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.724927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.733692] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.733707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.741463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.741478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.750208] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.750223] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.758845] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.758859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.767725] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.767739] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.781073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.781088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.788722] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.788736] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.797943] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.797957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.806459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.806474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.815057] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.815072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.823785] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.823799] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.832503] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.832518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.841143] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.841157] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.849789] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.849803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.858510] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.858524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.867127] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.867142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.875985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.876000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.884554] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.884568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.893337] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.893352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.902090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.902104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.910775] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.910789] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.918978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.918993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.927451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.927467] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.908 [2024-06-11 12:13:44.936276] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.908 [2024-06-11 12:13:44.936290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.169 [2024-06-11 12:13:44.944728] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.169 [2024-06-11 12:13:44.944742] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.169 [2024-06-11 12:13:44.953508] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.169 [2024-06-11 12:13:44.953523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.169 [2024-06-11 12:13:44.961512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.169 [2024-06-11 12:13:44.961526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.169 [2024-06-11 12:13:44.970511] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.169 [2024-06-11 12:13:44.970529] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.169 [2024-06-11 12:13:44.979025] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.169 [2024-06-11 12:13:44.979041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.169 [2024-06-11 12:13:44.987939] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.169 [2024-06-11 12:13:44.987954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.169 [2024-06-11 12:13:44.996438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.169 [2024-06-11 12:13:44.996452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.169 [2024-06-11 12:13:45.005163] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.169 [2024-06-11 12:13:45.005178] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.169 [2024-06-11 12:13:45.013910] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.169 [2024-06-11 12:13:45.013925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.169 [2024-06-11 12:13:45.022854] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.169 [2024-06-11 12:13:45.022869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.169 [2024-06-11 12:13:45.031569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.169 [2024-06-11 12:13:45.031584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.169 [2024-06-11 12:13:45.040338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.169 [2024-06-11 12:13:45.040353] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.169 [2024-06-11 12:13:45.048866] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.169 [2024-06-11 12:13:45.048881] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.169 [2024-06-11 12:13:45.057948] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.169 [2024-06-11 12:13:45.057963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.169 [2024-06-11 12:13:45.066361] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.169 [2024-06-11 12:13:45.066376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.169 [2024-06-11 12:13:45.075088] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.169 [2024-06-11 12:13:45.075103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.169 [2024-06-11 12:13:45.083637] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.169 [2024-06-11 12:13:45.083652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.169 [2024-06-11 12:13:45.092243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.169 [2024-06-11 12:13:45.092258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.169 [2024-06-11 12:13:45.100967] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.169 [2024-06-11 12:13:45.100982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.169 [2024-06-11 12:13:45.109591] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.169 [2024-06-11 12:13:45.109606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.169 [2024-06-11 12:13:45.118617] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.169 [2024-06-11 12:13:45.118631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.169 [2024-06-11 12:13:45.127067] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.169 [2024-06-11 12:13:45.127082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.170 [2024-06-11 12:13:45.135600] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.170 [2024-06-11 12:13:45.135620] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.170 [2024-06-11 12:13:45.145653] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.170 [2024-06-11 12:13:45.145668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.170 [2024-06-11 12:13:45.153432] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.170 [2024-06-11 12:13:45.153447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.170 [2024-06-11 12:13:45.162490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.170 [2024-06-11 12:13:45.162505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.170 [2024-06-11 12:13:45.171162] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.170 [2024-06-11 12:13:45.171177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.170 [2024-06-11 12:13:45.180289] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.170 [2024-06-11 12:13:45.180303] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.170 [2024-06-11 12:13:45.187919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.170 [2024-06-11 12:13:45.187934] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.170 [2024-06-11 12:13:45.197239] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.170 [2024-06-11 12:13:45.197255] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.430 [2024-06-11 12:13:45.205322] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.430 [2024-06-11 12:13:45.205337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.430 [2024-06-11 12:13:45.213761] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.430 [2024-06-11 12:13:45.213776] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.430 [2024-06-11 12:13:45.223085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.430 [2024-06-11 12:13:45.223100] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.430 [2024-06-11 12:13:45.231189] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.231204] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.431 [2024-06-11 12:13:45.240148] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.240163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.431 [2024-06-11 12:13:45.249135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.249150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.431 [2024-06-11 12:13:45.257717] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.257731] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.431 [2024-06-11 12:13:45.266305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.266320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.431 [2024-06-11 12:13:45.274781] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.274795] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.431 [2024-06-11 12:13:45.284101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.284115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.431 [2024-06-11 12:13:45.292178] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.292192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.431 [2024-06-11 12:13:45.300610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.300629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.431 [2024-06-11 12:13:45.309279] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.309294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.431 [2024-06-11 12:13:45.318031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.318046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.431 [2024-06-11 12:13:45.327209] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.327224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.431 [2024-06-11 12:13:45.335408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.335423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.431 [2024-06-11 12:13:45.344090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.344105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.431 [2024-06-11 12:13:45.352741] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.352756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.431 [2024-06-11 12:13:45.361666] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.361681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.431 [2024-06-11 12:13:45.369939] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.369954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.431 [2024-06-11 12:13:45.378517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.378532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.431 [2024-06-11 12:13:45.387135] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.387150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.431 [2024-06-11 12:13:45.395780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.395795] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.431 [2024-06-11 12:13:45.404440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.404455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.431 [2024-06-11 12:13:45.413299] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.413313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.431 [2024-06-11 12:13:45.422290] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.422305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.431 [2024-06-11 12:13:45.430845] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.430860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.431 [2024-06-11 12:13:45.439290] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.439305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.431 [2024-06-11 12:13:45.448002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.448021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.431 [2024-06-11 12:13:45.456977] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.431 [2024-06-11 12:13:45.456993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.692 [2024-06-11 12:13:45.465785] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.692 [2024-06-11 12:13:45.465804] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.692 [2024-06-11 12:13:45.474235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.692 [2024-06-11 12:13:45.474249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.692 [2024-06-11 12:13:45.483183] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.692 [2024-06-11 12:13:45.483198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.692 [2024-06-11 12:13:45.491899] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.692 [2024-06-11 12:13:45.491913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.692 [2024-06-11 12:13:45.500713] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.692 [2024-06-11 12:13:45.500727] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.692 [2024-06-11 12:13:45.509555] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.692 [2024-06-11 12:13:45.509570] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.692 [2024-06-11 12:13:45.518393] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.692 [2024-06-11 12:13:45.518408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.692 [2024-06-11 12:13:45.527075] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.692 [2024-06-11 12:13:45.527090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.692 [2024-06-11 12:13:45.535494] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.692 [2024-06-11 12:13:45.535509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.692 [2024-06-11 12:13:45.544491] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.692 [2024-06-11 12:13:45.544506] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.692 [2024-06-11 12:13:45.553459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.692 [2024-06-11 12:13:45.553474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.692 [2024-06-11 12:13:45.562057] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.692 [2024-06-11 12:13:45.562073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.692 [2024-06-11 12:13:45.570559] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.692 [2024-06-11 12:13:45.570574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.692 [2024-06-11 12:13:45.579183] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.692 [2024-06-11 12:13:45.579198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.692 [2024-06-11 12:13:45.587875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.692 [2024-06-11 12:13:45.587890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.692 [2024-06-11 12:13:45.596503] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.692 [2024-06-11 12:13:45.596518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.692 [2024-06-11 12:13:45.605558] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.692 [2024-06-11 12:13:45.605573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.692 [2024-06-11 12:13:45.614123] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.692 [2024-06-11 12:13:45.614138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.692 [2024-06-11 12:13:45.622718] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.692 [2024-06-11 12:13:45.622733] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.692 [2024-06-11 12:13:45.631233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.692 [2024-06-11 12:13:45.631248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.692 [2024-06-11 12:13:45.639748] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.692 [2024-06-11 12:13:45.639763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.692 [2024-06-11 12:13:45.647978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.692 [2024-06-11 12:13:45.647993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.692 [2024-06-11 12:13:45.656563] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.693 [2024-06-11 12:13:45.656578] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.693 [2024-06-11 12:13:45.664991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.693 [2024-06-11 12:13:45.665006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.693 [2024-06-11 12:13:45.673770] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.693 [2024-06-11 12:13:45.673785] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.693 [2024-06-11 12:13:45.682574] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.693 [2024-06-11 12:13:45.682589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.693 [2024-06-11 12:13:45.691293] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.693 [2024-06-11 12:13:45.691309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.693 [2024-06-11 12:13:45.699953] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.693 [2024-06-11 12:13:45.699969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.693 [2024-06-11 12:13:45.708848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.693 [2024-06-11 12:13:45.708862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.693 [2024-06-11 12:13:45.717735] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.693 [2024-06-11 12:13:45.717749] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.953 [2024-06-11 12:13:45.726379] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.953 [2024-06-11 12:13:45.726394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.953 [2024-06-11 12:13:45.735227] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.953 [2024-06-11 12:13:45.735242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.743761] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.743775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.752489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.752503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.761041] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.761056] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.770139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.770152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.778001] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.778015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.786905] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.786919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.794919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.794934] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.803530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.803544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.811978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.811992] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.820500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.820514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.829079] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.829093] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.837406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.837420] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.846384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.846398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.855104] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.855117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.863734] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.863748] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.872455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.872468] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.881672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.881686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.889886] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.889900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.898714] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.898728] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.907434] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.907449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.916121] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.916135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.924947] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.924962] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.933495] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.933509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.942338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.942353] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.950875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.950889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.959823] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.959837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.968515] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.968529] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.977301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.977316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.954 [2024-06-11 12:13:45.986138] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.954 [2024-06-11 12:13:45.986152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.215 [2024-06-11 12:13:45.994548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.215 [2024-06-11 12:13:45.994563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.215 [2024-06-11 12:13:46.003148] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.215 [2024-06-11 12:13:46.003162] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.215 [2024-06-11 12:13:46.011795] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.215 [2024-06-11 12:13:46.011809] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.215 [2024-06-11 12:13:46.020395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.215 [2024-06-11 12:13:46.020410] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.215 [2024-06-11 12:13:46.029358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.215 [2024-06-11 12:13:46.029372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.215 [2024-06-11 12:13:46.038083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.215 [2024-06-11 12:13:46.038098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.215 [2024-06-11 12:13:46.046984] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.215 [2024-06-11 12:13:46.046999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.215 [2024-06-11 12:13:46.055565] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.215 [2024-06-11 12:13:46.055580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.215 [2024-06-11 12:13:46.064366] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.215 [2024-06-11 12:13:46.064381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.215 [2024-06-11 12:13:46.073050] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.215 [2024-06-11 12:13:46.073064] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.215 [2024-06-11 12:13:46.081599] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.215 [2024-06-11 12:13:46.081613] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.215 [2024-06-11 12:13:46.090173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.215 [2024-06-11 12:13:46.090186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.215 [2024-06-11 12:13:46.098842] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.215 [2024-06-11 12:13:46.098856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.215 [2024-06-11 12:13:46.107518] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.215 [2024-06-11 12:13:46.107533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.215 [2024-06-11 12:13:46.116257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.215 [2024-06-11 12:13:46.116271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.215 [2024-06-11 12:13:46.124579] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.215 [2024-06-11 12:13:46.124594] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.215 [2024-06-11 12:13:46.133325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.215 [2024-06-11 12:13:46.133339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.215 00:19:33.215 Latency(us) 00:19:33.215 Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:19:33.215 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:19:33.215 Nvme1n1 : 5.01 20246.61 158.18 0.00 0.00 6315.73 2471.25 16384.00 00:19:33.215 =================================================================================================================== 00:19:33.215 Total : 20246.61 158.18 0.00 0.00 6315.73 2471.25 16384.00 00:19:33.215 [2024-06-11 12:13:46.139069] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.215 [2024-06-11 12:13:46.139081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.215 [2024-06-11 12:13:46.147088] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.215 [2024-06-11 12:13:46.147099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.216 [2024-06-11 12:13:46.155113] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.216 [2024-06-11 12:13:46.155124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.216 [2024-06-11 12:13:46.163135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.216 [2024-06-11 12:13:46.163146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.216 [2024-06-11 12:13:46.171150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.216 [2024-06-11 12:13:46.171160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.216 [2024-06-11 12:13:46.179173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.216 [2024-06-11 12:13:46.179182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.216 [2024-06-11 12:13:46.187191] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.216 [2024-06-11 12:13:46.187199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.216 [2024-06-11 12:13:46.195212] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.216 [2024-06-11 12:13:46.195220] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.216 [2024-06-11 12:13:46.203232] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.216 [2024-06-11 12:13:46.203240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.216 [2024-06-11 12:13:46.211254] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.216 [2024-06-11 12:13:46.211262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.216 [2024-06-11 12:13:46.219278] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.216 [2024-06-11 12:13:46.219287] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.216 [2024-06-11 12:13:46.227299] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.216 [2024-06-11 12:13:46.227309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.216 [2024-06-11 12:13:46.235320] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.216 [2024-06-11 12:13:46.235330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:19:33.216 [2024-06-11 12:13:46.243338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.216 [2024-06-11 12:13:46.243351] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.476 [2024-06-11 12:13:46.251358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:33.476 [2024-06-11 12:13:46.251367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:33.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1487689) - No such process 00:19:33.476 12:13:46 -- target/zcopy.sh@49 -- # wait 1487689 00:19:33.476 12:13:46 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:33.476 12:13:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:33.476 12:13:46 -- common/autotest_common.sh@10 -- # set +x 00:19:33.476 12:13:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:33.476 12:13:46 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:33.476 12:13:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:33.476 12:13:46 -- common/autotest_common.sh@10 -- # set +x 00:19:33.476 delay0 00:19:33.476 12:13:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:33.476 12:13:46 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:19:33.476 12:13:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:33.476 12:13:46 -- common/autotest_common.sh@10 -- # set +x 00:19:33.476 12:13:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:33.476 12:13:46 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:19:33.476 EAL: No free 2048 kB hugepages reported on node 1 00:19:33.476 [2024-06-11 12:13:46.386422] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:41.609 Initializing NVMe Controllers 00:19:41.609 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:41.609 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:41.609 Initialization complete. Launching workers. 
00:19:41.609 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 271, failed: 22128 00:19:41.609 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 22306, failed to submit 93 00:19:41.609 success 22202, unsuccess 104, failed 0 00:19:41.609 12:13:53 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:19:41.609 12:13:53 -- target/zcopy.sh@60 -- # nvmftestfini 00:19:41.609 12:13:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:41.609 12:13:53 -- nvmf/common.sh@116 -- # sync 00:19:41.609 12:13:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:41.609 12:13:53 -- nvmf/common.sh@119 -- # set +e 00:19:41.609 12:13:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:41.609 12:13:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:41.609 rmmod nvme_tcp 00:19:41.609 rmmod nvme_fabrics 00:19:41.609 rmmod nvme_keyring 00:19:41.609 12:13:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:41.609 12:13:53 -- nvmf/common.sh@123 -- # set -e 00:19:41.609 12:13:53 -- nvmf/common.sh@124 -- # return 0 00:19:41.609 12:13:53 -- nvmf/common.sh@477 -- # '[' -n 1485324 ']' 00:19:41.609 12:13:53 -- nvmf/common.sh@478 -- # killprocess 1485324 00:19:41.609 12:13:53 -- common/autotest_common.sh@926 -- # '[' -z 1485324 ']' 00:19:41.609 12:13:53 -- common/autotest_common.sh@930 -- # kill -0 1485324 00:19:41.609 12:13:53 -- common/autotest_common.sh@931 -- # uname 00:19:41.609 12:13:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:41.609 12:13:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1485324 00:19:41.609 12:13:53 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:41.609 12:13:53 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:41.609 12:13:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1485324' 00:19:41.609 killing process with pid 1485324 00:19:41.609 12:13:53 -- common/autotest_common.sh@945 -- # kill 1485324 00:19:41.609 12:13:53 -- common/autotest_common.sh@950 -- # wait 1485324 00:19:41.609 12:13:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:41.609 12:13:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:41.609 12:13:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:41.609 12:13:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:41.609 12:13:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:41.609 12:13:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.609 12:13:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.609 12:13:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.550 12:13:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:42.550 00:19:42.550 real 0m33.515s 00:19:42.550 user 0m44.960s 00:19:42.550 sys 0m10.942s 00:19:42.550 12:13:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:42.550 12:13:55 -- common/autotest_common.sh@10 -- # set +x 00:19:42.550 ************************************ 00:19:42.551 END TEST nvmf_zcopy 00:19:42.551 ************************************ 00:19:42.811 12:13:55 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:42.811 12:13:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:42.811 12:13:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:42.811 12:13:55 -- common/autotest_common.sh@10 -- # set +x 00:19:42.811 
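Once the abort run completes, nvmftestfini (logged above) unloads the kernel NVMe/TCP initiator modules, stops the target application, and flushes the test address. A rough standalone equivalent is sketched below; the pid and interface name are taken from this particular run and will differ elsewhere:

    sync
    modprobe -v -r nvme-tcp       # unload the kernel NVMe/TCP initiator
    modprobe -v -r nvme-fabrics   # then the fabrics core it pulls in
    kill 1485324                  # stop the nvmf target app (pid 1485324 in this run)
    ip -4 addr flush cvl_0_1      # drop the test address from the target-side interface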
************************************ 00:19:42.811 START TEST nvmf_nmic 00:19:42.811 ************************************ 00:19:42.811 12:13:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:42.811 * Looking for test storage... 00:19:42.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:42.811 12:13:55 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:42.811 12:13:55 -- nvmf/common.sh@7 -- # uname -s 00:19:42.811 12:13:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:42.811 12:13:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.811 12:13:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.811 12:13:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.812 12:13:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:42.812 12:13:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:42.812 12:13:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.812 12:13:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:42.812 12:13:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.812 12:13:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.812 12:13:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:42.812 12:13:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:42.812 12:13:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.812 12:13:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.812 12:13:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:42.812 12:13:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:42.812 12:13:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.812 12:13:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.812 12:13:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.812 12:13:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.812 12:13:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.812 12:13:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.812 12:13:55 -- paths/export.sh@5 -- # export PATH 00:19:42.812 12:13:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.812 12:13:55 -- nvmf/common.sh@46 -- # : 0 00:19:42.812 12:13:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:42.812 12:13:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:42.812 12:13:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:42.812 12:13:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.812 12:13:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.812 12:13:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:42.812 12:13:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:42.812 12:13:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:42.812 12:13:55 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:42.812 12:13:55 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:42.812 12:13:55 -- target/nmic.sh@14 -- # nvmftestinit 00:19:42.812 12:13:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:42.812 12:13:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.812 12:13:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:42.812 12:13:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:42.812 12:13:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:42.812 12:13:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.812 12:13:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:42.812 12:13:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.812 12:13:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:42.812 12:13:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:42.812 12:13:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:42.812 12:13:55 -- common/autotest_common.sh@10 -- # set +x 00:19:50.953 12:14:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:50.953 12:14:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:50.953 12:14:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:50.953 12:14:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:50.953 12:14:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:50.953 12:14:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:50.953 12:14:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:50.953 12:14:02 -- nvmf/common.sh@294 -- # net_devs=() 00:19:50.953 12:14:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:50.953 12:14:02 -- nvmf/common.sh@295 -- # 
e810=() 00:19:50.953 12:14:02 -- nvmf/common.sh@295 -- # local -ga e810 00:19:50.953 12:14:02 -- nvmf/common.sh@296 -- # x722=() 00:19:50.953 12:14:02 -- nvmf/common.sh@296 -- # local -ga x722 00:19:50.953 12:14:02 -- nvmf/common.sh@297 -- # mlx=() 00:19:50.953 12:14:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:50.953 12:14:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:50.953 12:14:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:50.953 12:14:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:50.953 12:14:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:50.953 12:14:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:50.953 12:14:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:50.953 12:14:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:50.953 12:14:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:50.953 12:14:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:50.953 12:14:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:50.953 12:14:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:50.953 12:14:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:50.953 12:14:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:50.953 12:14:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:50.953 12:14:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:50.953 12:14:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:50.953 12:14:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:50.953 12:14:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:50.953 12:14:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:50.953 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:50.953 12:14:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:50.953 12:14:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:50.953 12:14:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.953 12:14:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.953 12:14:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:50.953 12:14:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:50.953 12:14:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:50.953 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:50.954 12:14:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:50.954 12:14:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:50.954 12:14:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.954 12:14:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.954 12:14:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:50.954 12:14:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:50.954 12:14:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:50.954 12:14:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:50.954 12:14:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:50.954 12:14:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.954 12:14:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:50.954 12:14:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.954 12:14:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:50.954 Found net 
devices under 0000:31:00.0: cvl_0_0 00:19:50.954 12:14:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.954 12:14:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:50.954 12:14:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.954 12:14:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:50.954 12:14:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.954 12:14:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:50.954 Found net devices under 0000:31:00.1: cvl_0_1 00:19:50.954 12:14:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.954 12:14:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:50.954 12:14:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:50.954 12:14:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:50.954 12:14:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:50.954 12:14:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:50.954 12:14:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:50.954 12:14:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:50.954 12:14:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:50.954 12:14:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:50.954 12:14:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:50.954 12:14:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:50.954 12:14:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:50.954 12:14:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:50.954 12:14:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:50.954 12:14:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:50.954 12:14:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:50.954 12:14:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:50.954 12:14:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:50.954 12:14:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:50.954 12:14:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:50.954 12:14:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:50.954 12:14:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:50.954 12:14:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:50.954 12:14:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:50.954 12:14:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:50.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:50.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:19:50.954 00:19:50.954 --- 10.0.0.2 ping statistics --- 00:19:50.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.954 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:19:50.954 12:14:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:50.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:50.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:19:50.954 00:19:50.954 --- 10.0.0.1 ping statistics --- 00:19:50.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.954 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:19:50.954 12:14:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:50.954 12:14:02 -- nvmf/common.sh@410 -- # return 0 00:19:50.954 12:14:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:50.954 12:14:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:50.954 12:14:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:50.954 12:14:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:50.954 12:14:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:50.954 12:14:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:50.954 12:14:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:50.954 12:14:02 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:50.954 12:14:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:50.954 12:14:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:50.954 12:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:50.954 12:14:02 -- nvmf/common.sh@469 -- # nvmfpid=1494357 00:19:50.954 12:14:02 -- nvmf/common.sh@470 -- # waitforlisten 1494357 00:19:50.954 12:14:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:50.954 12:14:02 -- common/autotest_common.sh@819 -- # '[' -z 1494357 ']' 00:19:50.954 12:14:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.954 12:14:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:50.954 12:14:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.954 12:14:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:50.954 12:14:02 -- common/autotest_common.sh@10 -- # set +x 00:19:50.954 [2024-06-11 12:14:03.003453] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:50.954 [2024-06-11 12:14:03.003519] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.954 EAL: No free 2048 kB hugepages reported on node 1 00:19:50.954 [2024-06-11 12:14:03.076479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:50.954 [2024-06-11 12:14:03.115597] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:50.954 [2024-06-11 12:14:03.115749] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.954 [2024-06-11 12:14:03.115761] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.954 [2024-06-11 12:14:03.115770] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
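The nvmf_tcp_init block above builds the test topology on a single host: the target-side port cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2, the initiator-side port cvl_0_1 stays in the default namespace as 10.0.0.1, and nvmf_tgt is then launched inside the namespace. A condensed sketch of the same setup, assuming the interface names shown in this log and a root shell:

    # flush stale addresses on both ports
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    # target port gets its own namespace and 10.0.0.2/24
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    # initiator port keeps 10.0.0.1/24 in the default namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # accept inbound TCP/4420 on the initiator interface and verify both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # start the target inside the namespace (subsystems and listeners are added later via rpc.py)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF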
00:19:50.954 [2024-06-11 12:14:03.115922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.954 [2024-06-11 12:14:03.116114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.954 [2024-06-11 12:14:03.116176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.954 [2024-06-11 12:14:03.116176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:50.954 12:14:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:50.954 12:14:03 -- common/autotest_common.sh@852 -- # return 0 00:19:50.954 12:14:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:50.954 12:14:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:50.954 12:14:03 -- common/autotest_common.sh@10 -- # set +x 00:19:50.954 12:14:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.954 12:14:03 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:50.954 12:14:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:50.954 12:14:03 -- common/autotest_common.sh@10 -- # set +x 00:19:50.954 [2024-06-11 12:14:03.825360] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:50.954 12:14:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:50.954 12:14:03 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:50.954 12:14:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:50.954 12:14:03 -- common/autotest_common.sh@10 -- # set +x 00:19:50.954 Malloc0 00:19:50.954 12:14:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:50.954 12:14:03 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:50.954 12:14:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:50.954 12:14:03 -- common/autotest_common.sh@10 -- # set +x 00:19:50.954 12:14:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:50.954 12:14:03 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:50.954 12:14:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:50.954 12:14:03 -- common/autotest_common.sh@10 -- # set +x 00:19:50.954 12:14:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:50.954 12:14:03 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:50.954 12:14:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:50.954 12:14:03 -- common/autotest_common.sh@10 -- # set +x 00:19:50.954 [2024-06-11 12:14:03.884626] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:50.954 12:14:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:50.954 12:14:03 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:50.954 test case1: single bdev can't be used in multiple subsystems 00:19:50.954 12:14:03 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:50.954 12:14:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:50.954 12:14:03 -- common/autotest_common.sh@10 -- # set +x 00:19:50.954 12:14:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:50.954 12:14:03 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:50.954 12:14:03 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:19:50.954 12:14:03 -- common/autotest_common.sh@10 -- # set +x 00:19:50.954 12:14:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:50.954 12:14:03 -- target/nmic.sh@28 -- # nmic_status=0 00:19:50.954 12:14:03 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:50.954 12:14:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:50.954 12:14:03 -- common/autotest_common.sh@10 -- # set +x 00:19:50.954 [2024-06-11 12:14:03.920591] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:50.954 [2024-06-11 12:14:03.920609] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:50.954 [2024-06-11 12:14:03.920617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:50.954 request: 00:19:50.954 { 00:19:50.954 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:50.954 "namespace": { 00:19:50.954 "bdev_name": "Malloc0" 00:19:50.954 }, 00:19:50.955 "method": "nvmf_subsystem_add_ns", 00:19:50.955 "req_id": 1 00:19:50.955 } 00:19:50.955 Got JSON-RPC error response 00:19:50.955 response: 00:19:50.955 { 00:19:50.955 "code": -32602, 00:19:50.955 "message": "Invalid parameters" 00:19:50.955 } 00:19:50.955 12:14:03 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:50.955 12:14:03 -- target/nmic.sh@29 -- # nmic_status=1 00:19:50.955 12:14:03 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:50.955 12:14:03 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:50.955 Adding namespace failed - expected result. 00:19:50.955 12:14:03 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:50.955 test case2: host connect to nvmf target in multiple paths 00:19:50.955 12:14:03 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:50.955 12:14:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:50.955 12:14:03 -- common/autotest_common.sh@10 -- # set +x 00:19:50.955 [2024-06-11 12:14:03.932726] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:50.955 12:14:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:50.955 12:14:03 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:52.864 12:14:05 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:19:54.249 12:14:06 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:54.249 12:14:06 -- common/autotest_common.sh@1177 -- # local i=0 00:19:54.249 12:14:06 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:54.249 12:14:06 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:54.249 12:14:06 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:56.200 12:14:08 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:56.200 12:14:08 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:56.200 12:14:08 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:56.200 12:14:08 -- common/autotest_common.sh@1186 -- # 
nvme_devices=1 00:19:56.200 12:14:08 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:56.200 12:14:08 -- common/autotest_common.sh@1187 -- # return 0 00:19:56.201 12:14:08 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:56.201 [global] 00:19:56.201 thread=1 00:19:56.201 invalidate=1 00:19:56.201 rw=write 00:19:56.201 time_based=1 00:19:56.201 runtime=1 00:19:56.201 ioengine=libaio 00:19:56.201 direct=1 00:19:56.201 bs=4096 00:19:56.201 iodepth=1 00:19:56.201 norandommap=0 00:19:56.201 numjobs=1 00:19:56.201 00:19:56.201 verify_dump=1 00:19:56.201 verify_backlog=512 00:19:56.201 verify_state_save=0 00:19:56.201 do_verify=1 00:19:56.201 verify=crc32c-intel 00:19:56.201 [job0] 00:19:56.201 filename=/dev/nvme0n1 00:19:56.201 Could not set queue depth (nvme0n1) 00:19:56.465 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:56.465 fio-3.35 00:19:56.465 Starting 1 thread 00:19:57.849 00:19:57.849 job0: (groupid=0, jobs=1): err= 0: pid=1495700: Tue Jun 11 12:14:10 2024 00:19:57.849 read: IOPS=17, BW=69.2KiB/s (70.9kB/s)(72.0KiB/1040msec) 00:19:57.849 slat (nsec): min=10281, max=26741, avg=25252.72, stdev=3748.68 00:19:57.849 clat (usec): min=40839, max=42125, avg=41569.33, stdev=479.00 00:19:57.849 lat (usec): min=40849, max=42152, avg=41594.59, stdev=480.54 00:19:57.849 clat percentiles (usec): 00:19:57.849 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:19:57.849 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:19:57.849 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:57.849 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:57.849 | 99.99th=[42206] 00:19:57.849 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:19:57.849 slat (usec): min=9, max=26836, avg=81.10, stdev=1184.80 00:19:57.849 clat (usec): min=245, max=757, avg=477.00, stdev=94.72 00:19:57.849 lat (usec): min=256, max=27474, avg=558.10, stdev=1195.94 00:19:57.849 clat percentiles (usec): 00:19:57.849 | 1.00th=[ 269], 5.00th=[ 330], 10.00th=[ 355], 20.00th=[ 420], 00:19:57.849 | 30.00th=[ 437], 40.00th=[ 449], 50.00th=[ 469], 60.00th=[ 486], 00:19:57.849 | 70.00th=[ 515], 80.00th=[ 545], 90.00th=[ 611], 95.00th=[ 660], 00:19:57.849 | 99.00th=[ 717], 99.50th=[ 734], 99.90th=[ 758], 99.95th=[ 758], 00:19:57.849 | 99.99th=[ 758] 00:19:57.849 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:19:57.849 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:57.849 lat (usec) : 250=0.38%, 500=64.53%, 750=31.51%, 1000=0.19% 00:19:57.849 lat (msec) : 50=3.40% 00:19:57.849 cpu : usr=1.15%, sys=1.15%, ctx=535, majf=0, minf=1 00:19:57.849 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:57.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.849 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.849 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:57.849 00:19:57.849 Run status group 0 (all jobs): 00:19:57.849 READ: bw=69.2KiB/s (70.9kB/s), 69.2KiB/s-69.2KiB/s (70.9kB/s-70.9kB/s), io=72.0KiB (73.7kB), run=1040-1040msec 00:19:57.849 WRITE: bw=1969KiB/s (2016kB/s), 1969KiB/s-1969KiB/s (2016kB/s-2016kB/s), 
io=2048KiB (2097kB), run=1040-1040msec 00:19:57.849 00:19:57.849 Disk stats (read/write): 00:19:57.849 nvme0n1: ios=39/512, merge=0/0, ticks=1554/222, in_queue=1776, util=98.50% 00:19:57.849 12:14:10 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:57.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:57.849 12:14:10 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:57.849 12:14:10 -- common/autotest_common.sh@1198 -- # local i=0 00:19:57.849 12:14:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:57.849 12:14:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:57.849 12:14:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:57.849 12:14:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:57.849 12:14:10 -- common/autotest_common.sh@1210 -- # return 0 00:19:57.849 12:14:10 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:57.849 12:14:10 -- target/nmic.sh@53 -- # nvmftestfini 00:19:57.849 12:14:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:57.849 12:14:10 -- nvmf/common.sh@116 -- # sync 00:19:57.849 12:14:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:57.849 12:14:10 -- nvmf/common.sh@119 -- # set +e 00:19:57.849 12:14:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:57.849 12:14:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:57.849 rmmod nvme_tcp 00:19:57.849 rmmod nvme_fabrics 00:19:57.849 rmmod nvme_keyring 00:19:57.849 12:14:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:57.849 12:14:10 -- nvmf/common.sh@123 -- # set -e 00:19:57.849 12:14:10 -- nvmf/common.sh@124 -- # return 0 00:19:57.849 12:14:10 -- nvmf/common.sh@477 -- # '[' -n 1494357 ']' 00:19:57.849 12:14:10 -- nvmf/common.sh@478 -- # killprocess 1494357 00:19:57.849 12:14:10 -- common/autotest_common.sh@926 -- # '[' -z 1494357 ']' 00:19:57.849 12:14:10 -- common/autotest_common.sh@930 -- # kill -0 1494357 00:19:57.849 12:14:10 -- common/autotest_common.sh@931 -- # uname 00:19:57.849 12:14:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:57.849 12:14:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1494357 00:19:57.849 12:14:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:57.849 12:14:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:57.849 12:14:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1494357' 00:19:57.849 killing process with pid 1494357 00:19:57.849 12:14:10 -- common/autotest_common.sh@945 -- # kill 1494357 00:19:57.849 12:14:10 -- common/autotest_common.sh@950 -- # wait 1494357 00:19:58.110 12:14:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:58.110 12:14:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:58.110 12:14:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:58.110 12:14:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:58.110 12:14:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:58.110 12:14:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.110 12:14:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.110 12:14:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.024 12:14:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:00.024 00:20:00.024 real 0m17.396s 00:20:00.025 user 0m44.278s 00:20:00.025 sys 0m6.016s 00:20:00.025 12:14:12 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:20:00.025 12:14:12 -- common/autotest_common.sh@10 -- # set +x 00:20:00.025 ************************************ 00:20:00.025 END TEST nvmf_nmic 00:20:00.025 ************************************ 00:20:00.025 12:14:13 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:00.025 12:14:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:00.025 12:14:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:00.025 12:14:13 -- common/autotest_common.sh@10 -- # set +x 00:20:00.025 ************************************ 00:20:00.025 START TEST nvmf_fio_target 00:20:00.025 ************************************ 00:20:00.025 12:14:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:00.287 * Looking for test storage... 00:20:00.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:00.287 12:14:13 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:00.287 12:14:13 -- nvmf/common.sh@7 -- # uname -s 00:20:00.287 12:14:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.287 12:14:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.287 12:14:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.287 12:14:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.287 12:14:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.287 12:14:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.287 12:14:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.287 12:14:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.287 12:14:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.287 12:14:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.287 12:14:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:00.287 12:14:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:00.287 12:14:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.287 12:14:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.287 12:14:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:00.287 12:14:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:00.287 12:14:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.287 12:14:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.287 12:14:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.287 12:14:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.287 12:14:13 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.287 12:14:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.287 12:14:13 -- paths/export.sh@5 -- # export PATH 00:20:00.287 12:14:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.287 12:14:13 -- nvmf/common.sh@46 -- # : 0 00:20:00.287 12:14:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:00.287 12:14:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:00.287 12:14:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:00.287 12:14:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.287 12:14:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.287 12:14:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:00.287 12:14:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:00.287 12:14:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:00.287 12:14:13 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:00.287 12:14:13 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:00.287 12:14:13 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:00.287 12:14:13 -- target/fio.sh@16 -- # nvmftestinit 00:20:00.287 12:14:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:00.287 12:14:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.287 12:14:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:00.287 12:14:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:00.287 12:14:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:00.287 12:14:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.287 12:14:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.287 12:14:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.287 12:14:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:00.287 12:14:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:00.287 12:14:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:00.287 12:14:13 -- 
common/autotest_common.sh@10 -- # set +x 00:20:08.434 12:14:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:08.434 12:14:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:08.434 12:14:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:08.434 12:14:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:08.434 12:14:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:08.434 12:14:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:08.434 12:14:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:08.434 12:14:19 -- nvmf/common.sh@294 -- # net_devs=() 00:20:08.434 12:14:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:08.434 12:14:19 -- nvmf/common.sh@295 -- # e810=() 00:20:08.434 12:14:19 -- nvmf/common.sh@295 -- # local -ga e810 00:20:08.434 12:14:19 -- nvmf/common.sh@296 -- # x722=() 00:20:08.434 12:14:19 -- nvmf/common.sh@296 -- # local -ga x722 00:20:08.434 12:14:19 -- nvmf/common.sh@297 -- # mlx=() 00:20:08.434 12:14:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:08.435 12:14:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:08.435 12:14:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:08.435 12:14:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:08.435 12:14:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:08.435 12:14:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:08.435 12:14:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:08.435 12:14:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:08.435 12:14:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:08.435 12:14:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:08.435 12:14:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:08.435 12:14:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:08.435 12:14:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:08.435 12:14:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:08.435 12:14:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:08.435 12:14:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:08.435 12:14:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:08.435 12:14:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:08.435 12:14:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:08.435 12:14:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:08.435 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:08.435 12:14:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:08.435 12:14:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:08.435 12:14:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.435 12:14:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.435 12:14:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:08.435 12:14:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:08.435 12:14:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:08.435 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:08.435 12:14:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:08.435 12:14:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:08.435 12:14:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.435 12:14:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:20:08.435 12:14:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:08.435 12:14:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:08.435 12:14:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:08.435 12:14:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:08.435 12:14:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:08.435 12:14:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.435 12:14:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:08.435 12:14:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.435 12:14:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:08.435 Found net devices under 0000:31:00.0: cvl_0_0 00:20:08.435 12:14:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.435 12:14:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:08.435 12:14:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.435 12:14:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:08.435 12:14:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.435 12:14:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:08.435 Found net devices under 0000:31:00.1: cvl_0_1 00:20:08.435 12:14:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.435 12:14:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:08.435 12:14:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:08.435 12:14:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:08.435 12:14:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:08.435 12:14:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:08.435 12:14:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.435 12:14:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:08.435 12:14:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:08.435 12:14:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:08.435 12:14:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:08.435 12:14:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:08.435 12:14:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:08.435 12:14:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:08.435 12:14:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.435 12:14:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:08.435 12:14:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:08.435 12:14:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:08.435 12:14:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:08.435 12:14:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:08.435 12:14:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:08.435 12:14:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:08.435 12:14:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:08.435 12:14:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:08.435 12:14:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:08.435 12:14:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:08.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:08.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:20:08.435 00:20:08.435 --- 10.0.0.2 ping statistics --- 00:20:08.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.435 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:20:08.435 12:14:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:08.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:08.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:20:08.436 00:20:08.436 --- 10.0.0.1 ping statistics --- 00:20:08.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.436 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:20:08.436 12:14:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.436 12:14:20 -- nvmf/common.sh@410 -- # return 0 00:20:08.436 12:14:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:08.436 12:14:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.436 12:14:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:08.436 12:14:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:08.436 12:14:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.436 12:14:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:08.436 12:14:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:08.436 12:14:20 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:20:08.436 12:14:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:08.436 12:14:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:08.436 12:14:20 -- common/autotest_common.sh@10 -- # set +x 00:20:08.436 12:14:20 -- nvmf/common.sh@469 -- # nvmfpid=1500247 00:20:08.436 12:14:20 -- nvmf/common.sh@470 -- # waitforlisten 1500247 00:20:08.436 12:14:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:08.436 12:14:20 -- common/autotest_common.sh@819 -- # '[' -z 1500247 ']' 00:20:08.436 12:14:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.436 12:14:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:08.436 12:14:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.436 12:14:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:08.436 12:14:20 -- common/autotest_common.sh@10 -- # set +x 00:20:08.436 [2024-06-11 12:14:20.407028] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:08.436 [2024-06-11 12:14:20.407091] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.436 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.436 [2024-06-11 12:14:20.481537] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:08.436 [2024-06-11 12:14:20.518753] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:08.436 [2024-06-11 12:14:20.518915] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.436 [2024-06-11 12:14:20.518927] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:08.436 [2024-06-11 12:14:20.518937] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:08.436 [2024-06-11 12:14:20.519077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.436 [2024-06-11 12:14:20.519317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.436 [2024-06-11 12:14:20.519318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:08.436 [2024-06-11 12:14:20.519127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.436 12:14:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:08.436 12:14:21 -- common/autotest_common.sh@852 -- # return 0 00:20:08.436 12:14:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:08.436 12:14:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:08.436 12:14:21 -- common/autotest_common.sh@10 -- # set +x 00:20:08.436 12:14:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.436 12:14:21 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:08.436 [2024-06-11 12:14:21.343640] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.436 12:14:21 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:08.698 12:14:21 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:20:08.698 12:14:21 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:08.698 12:14:21 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:20:08.698 12:14:21 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:08.959 12:14:21 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:20:08.959 12:14:21 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:09.219 12:14:22 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:20:09.220 12:14:22 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:20:09.220 12:14:22 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:09.480 12:14:22 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:20:09.480 12:14:22 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:09.741 12:14:22 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:20:09.741 12:14:22 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:09.741 12:14:22 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:20:09.741 12:14:22 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:20:10.002 12:14:22 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:10.262 12:14:23 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:10.262 12:14:23 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:10.262 12:14:23 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:10.262 12:14:23 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:10.522 12:14:23 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:10.522 [2024-06-11 12:14:23.545167] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.783 12:14:23 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:20:10.783 12:14:23 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:20:11.045 12:14:23 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:12.436 12:14:25 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:20:12.436 12:14:25 -- common/autotest_common.sh@1177 -- # local i=0 00:20:12.436 12:14:25 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:12.436 12:14:25 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:20:12.436 12:14:25 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:20:12.436 12:14:25 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:14.409 12:14:27 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:14.409 12:14:27 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:14.409 12:14:27 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:20:14.409 12:14:27 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:20:14.409 12:14:27 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:14.409 12:14:27 -- common/autotest_common.sh@1187 -- # return 0 00:20:14.409 12:14:27 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:14.409 [global] 00:20:14.409 thread=1 00:20:14.409 invalidate=1 00:20:14.409 rw=write 00:20:14.409 time_based=1 00:20:14.409 runtime=1 00:20:14.409 ioengine=libaio 00:20:14.409 direct=1 00:20:14.409 bs=4096 00:20:14.409 iodepth=1 00:20:14.409 norandommap=0 00:20:14.409 numjobs=1 00:20:14.409 00:20:14.409 verify_dump=1 00:20:14.409 verify_backlog=512 00:20:14.409 verify_state_save=0 00:20:14.409 do_verify=1 00:20:14.409 verify=crc32c-intel 00:20:14.409 [job0] 00:20:14.409 filename=/dev/nvme0n1 00:20:14.409 [job1] 00:20:14.409 filename=/dev/nvme0n2 00:20:14.409 [job2] 00:20:14.409 filename=/dev/nvme0n3 00:20:14.409 [job3] 00:20:14.409 filename=/dev/nvme0n4 00:20:14.676 Could not set queue depth (nvme0n1) 00:20:14.676 Could not set queue depth (nvme0n2) 00:20:14.676 Could not set queue depth (nvme0n3) 00:20:14.676 Could not set queue depth (nvme0n4) 00:20:14.937 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:14.937 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:14.937 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:20:14.937 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:14.937 fio-3.35 00:20:14.937 Starting 4 threads 00:20:16.348 00:20:16.348 job0: (groupid=0, jobs=1): err= 0: pid=1502047: Tue Jun 11 12:14:29 2024 00:20:16.348 read: IOPS=505, BW=2024KiB/s (2072kB/s)(2060KiB/1018msec) 00:20:16.348 slat (nsec): min=6968, max=56368, avg=22916.70, stdev=7897.52 00:20:16.348 clat (usec): min=416, max=41958, avg=995.30, stdev=3105.95 00:20:16.348 lat (usec): min=442, max=41984, avg=1018.21, stdev=3106.23 00:20:16.348 clat percentiles (usec): 00:20:16.348 | 1.00th=[ 498], 5.00th=[ 627], 10.00th=[ 660], 20.00th=[ 709], 00:20:16.348 | 30.00th=[ 734], 40.00th=[ 758], 50.00th=[ 775], 60.00th=[ 783], 00:20:16.348 | 70.00th=[ 799], 80.00th=[ 816], 90.00th=[ 840], 95.00th=[ 857], 00:20:16.348 | 99.00th=[ 938], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:20:16.348 | 99.99th=[42206] 00:20:16.348 write: IOPS=1005, BW=4024KiB/s (4120kB/s)(4096KiB/1018msec); 0 zone resets 00:20:16.348 slat (nsec): min=6059, max=52034, avg=26830.79, stdev=11099.75 00:20:16.348 clat (usec): min=174, max=986, avg=444.33, stdev=112.83 00:20:16.348 lat (usec): min=184, max=1020, avg=471.16, stdev=118.03 00:20:16.348 clat percentiles (usec): 00:20:16.348 | 1.00th=[ 253], 5.00th=[ 273], 10.00th=[ 306], 20.00th=[ 351], 00:20:16.348 | 30.00th=[ 388], 40.00th=[ 429], 50.00th=[ 449], 60.00th=[ 469], 00:20:16.348 | 70.00th=[ 486], 80.00th=[ 506], 90.00th=[ 553], 95.00th=[ 652], 00:20:16.348 | 99.00th=[ 824], 99.50th=[ 889], 99.90th=[ 971], 99.95th=[ 988], 00:20:16.348 | 99.99th=[ 988] 00:20:16.348 bw ( KiB/s): min= 4096, max= 4096, per=34.27%, avg=4096.00, stdev= 0.00, samples=2 00:20:16.348 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:20:16.348 lat (usec) : 250=0.58%, 500=51.27%, 750=24.82%, 1000=23.13% 00:20:16.348 lat (msec) : 50=0.19% 00:20:16.348 cpu : usr=2.16%, sys=3.93%, ctx=1542, majf=0, minf=1 00:20:16.348 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:16.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.348 issued rwts: total=515,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.348 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:16.348 job1: (groupid=0, jobs=1): err= 0: pid=1502048: Tue Jun 11 12:14:29 2024 00:20:16.348 read: IOPS=474, BW=1899KiB/s (1944kB/s)(1952KiB/1028msec) 00:20:16.348 slat (nsec): min=4423, max=45314, avg=25674.49, stdev=6516.79 00:20:16.348 clat (usec): min=410, max=42188, avg=1403.15, stdev=4128.62 00:20:16.348 lat (usec): min=420, max=42193, avg=1428.82, stdev=4127.06 00:20:16.348 clat percentiles (usec): 00:20:16.348 | 1.00th=[ 570], 5.00th=[ 824], 10.00th=[ 873], 20.00th=[ 930], 00:20:16.348 | 30.00th=[ 963], 40.00th=[ 988], 50.00th=[ 1004], 60.00th=[ 1020], 00:20:16.348 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1106], 00:20:16.348 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:16.348 | 99.99th=[42206] 00:20:16.348 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:20:16.348 slat (nsec): min=9346, max=66489, avg=31200.73, stdev=10123.29 00:20:16.348 clat (usec): min=247, max=943, avg=596.50, stdev=116.34 00:20:16.348 lat (usec): min=258, max=979, avg=627.70, stdev=121.06 00:20:16.348 clat percentiles (usec): 00:20:16.348 | 1.00th=[ 310], 5.00th=[ 379], 10.00th=[ 445], 
20.00th=[ 494], 00:20:16.348 | 30.00th=[ 553], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 635], 00:20:16.348 | 70.00th=[ 660], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 766], 00:20:16.348 | 99.00th=[ 832], 99.50th=[ 865], 99.90th=[ 947], 99.95th=[ 947], 00:20:16.348 | 99.99th=[ 947] 00:20:16.348 bw ( KiB/s): min= 4096, max= 4096, per=34.27%, avg=4096.00, stdev= 0.00, samples=1 00:20:16.348 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:16.348 lat (usec) : 250=0.10%, 500=10.70%, 750=38.60%, 1000=24.90% 00:20:16.348 lat (msec) : 2=25.20%, 50=0.50% 00:20:16.348 cpu : usr=2.63%, sys=3.12%, ctx=1001, majf=0, minf=1 00:20:16.348 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:16.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.349 issued rwts: total=488,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.349 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:16.349 job2: (groupid=0, jobs=1): err= 0: pid=1502049: Tue Jun 11 12:14:29 2024 00:20:16.349 read: IOPS=67, BW=268KiB/s (275kB/s)(276KiB/1028msec) 00:20:16.349 slat (nsec): min=4404, max=48732, avg=14279.20, stdev=11527.14 00:20:16.349 clat (usec): min=393, max=42890, avg=10416.45, stdev=17516.23 00:20:16.349 lat (usec): min=400, max=42917, avg=10430.73, stdev=17523.64 00:20:16.349 clat percentiles (usec): 00:20:16.349 | 1.00th=[ 396], 5.00th=[ 570], 10.00th=[ 627], 20.00th=[ 676], 00:20:16.349 | 30.00th=[ 783], 40.00th=[ 816], 50.00th=[ 848], 60.00th=[ 955], 00:20:16.349 | 70.00th=[ 1188], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:20:16.349 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:20:16.349 | 99.99th=[42730] 00:20:16.349 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:20:16.349 slat (nsec): min=9362, max=54110, avg=30959.66, stdev=9472.46 00:20:16.349 clat (usec): min=192, max=1400, avg=562.59, stdev=151.14 00:20:16.349 lat (usec): min=227, max=1436, avg=593.55, stdev=155.58 00:20:16.349 clat percentiles (usec): 00:20:16.349 | 1.00th=[ 265], 5.00th=[ 322], 10.00th=[ 375], 20.00th=[ 433], 00:20:16.349 | 30.00th=[ 474], 40.00th=[ 510], 50.00th=[ 553], 60.00th=[ 611], 00:20:16.349 | 70.00th=[ 652], 80.00th=[ 701], 90.00th=[ 750], 95.00th=[ 791], 00:20:16.349 | 99.00th=[ 889], 99.50th=[ 1012], 99.90th=[ 1401], 99.95th=[ 1401], 00:20:16.349 | 99.99th=[ 1401] 00:20:16.349 bw ( KiB/s): min= 4096, max= 4096, per=34.27%, avg=4096.00, stdev= 0.00, samples=1 00:20:16.349 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:16.349 lat (usec) : 250=0.34%, 500=33.22%, 750=49.05%, 1000=12.56% 00:20:16.349 lat (msec) : 2=1.89%, 4=0.17%, 50=2.75% 00:20:16.349 cpu : usr=1.36%, sys=1.46%, ctx=582, majf=0, minf=1 00:20:16.349 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:16.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.349 issued rwts: total=69,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.349 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:16.349 job3: (groupid=0, jobs=1): err= 0: pid=1502050: Tue Jun 11 12:14:29 2024 00:20:16.349 read: IOPS=501, BW=2008KiB/s (2056kB/s)(2060KiB/1026msec) 00:20:16.349 slat (nsec): min=6969, max=56056, avg=23649.85, stdev=8142.97 00:20:16.349 clat (usec): min=421, max=41961, 
avg=1001.61, stdev=3122.99 00:20:16.349 lat (usec): min=447, max=41988, avg=1025.26, stdev=3123.32 00:20:16.349 clat percentiles (usec): 00:20:16.349 | 1.00th=[ 553], 5.00th=[ 619], 10.00th=[ 660], 20.00th=[ 709], 00:20:16.349 | 30.00th=[ 725], 40.00th=[ 750], 50.00th=[ 775], 60.00th=[ 791], 00:20:16.349 | 70.00th=[ 807], 80.00th=[ 824], 90.00th=[ 857], 95.00th=[ 881], 00:20:16.349 | 99.00th=[ 947], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:20:16.349 | 99.99th=[42206] 00:20:16.349 write: IOPS=998, BW=3992KiB/s (4088kB/s)(4096KiB/1026msec); 0 zone resets 00:20:16.349 slat (usec): min=8, max=2252, avg=31.33, stdev=70.48 00:20:16.349 clat (usec): min=202, max=850, avg=443.93, stdev=107.57 00:20:16.349 lat (usec): min=211, max=2555, avg=475.26, stdev=130.42 00:20:16.349 clat percentiles (usec): 00:20:16.349 | 1.00th=[ 245], 5.00th=[ 273], 10.00th=[ 306], 20.00th=[ 351], 00:20:16.349 | 30.00th=[ 379], 40.00th=[ 416], 50.00th=[ 449], 60.00th=[ 469], 00:20:16.349 | 70.00th=[ 490], 80.00th=[ 519], 90.00th=[ 578], 95.00th=[ 644], 00:20:16.349 | 99.00th=[ 734], 99.50th=[ 766], 99.90th=[ 832], 99.95th=[ 848], 00:20:16.349 | 99.99th=[ 848] 00:20:16.349 bw ( KiB/s): min= 4096, max= 4096, per=34.27%, avg=4096.00, stdev= 0.00, samples=2 00:20:16.349 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:20:16.349 lat (usec) : 250=0.91%, 500=48.28%, 750=29.89%, 1000=20.73% 00:20:16.349 lat (msec) : 50=0.19% 00:20:16.349 cpu : usr=2.15%, sys=4.29%, ctx=1542, majf=0, minf=1 00:20:16.349 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:16.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.349 issued rwts: total=515,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.349 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:16.349 00:20:16.349 Run status group 0 (all jobs): 00:20:16.349 READ: bw=6175KiB/s (6323kB/s), 268KiB/s-2024KiB/s (275kB/s-2072kB/s), io=6348KiB (6500kB), run=1018-1028msec 00:20:16.349 WRITE: bw=11.7MiB/s (12.2MB/s), 1992KiB/s-4024KiB/s (2040kB/s-4120kB/s), io=12.0MiB (12.6MB), run=1018-1028msec 00:20:16.349 00:20:16.349 Disk stats (read/write): 00:20:16.349 nvme0n1: ios=567/907, merge=0/0, ticks=465/376, in_queue=841, util=86.77% 00:20:16.349 nvme0n2: ios=479/512, merge=0/0, ticks=1275/238, in_queue=1513, util=88.16% 00:20:16.349 nvme0n3: ios=60/512, merge=0/0, ticks=1366/237, in_queue=1603, util=92.29% 00:20:16.349 nvme0n4: ios=560/916, merge=0/0, ticks=473/377, in_queue=850, util=96.90% 00:20:16.349 12:14:29 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:20:16.349 [global] 00:20:16.349 thread=1 00:20:16.349 invalidate=1 00:20:16.349 rw=randwrite 00:20:16.349 time_based=1 00:20:16.349 runtime=1 00:20:16.349 ioengine=libaio 00:20:16.349 direct=1 00:20:16.349 bs=4096 00:20:16.349 iodepth=1 00:20:16.349 norandommap=0 00:20:16.349 numjobs=1 00:20:16.349 00:20:16.349 verify_dump=1 00:20:16.349 verify_backlog=512 00:20:16.349 verify_state_save=0 00:20:16.349 do_verify=1 00:20:16.349 verify=crc32c-intel 00:20:16.349 [job0] 00:20:16.349 filename=/dev/nvme0n1 00:20:16.349 [job1] 00:20:16.349 filename=/dev/nvme0n2 00:20:16.349 [job2] 00:20:16.349 filename=/dev/nvme0n3 00:20:16.349 [job3] 00:20:16.349 filename=/dev/nvme0n4 00:20:16.349 Could not set queue depth (nvme0n1) 00:20:16.349 Could not set queue depth (nvme0n2) 
00:20:16.349 Could not set queue depth (nvme0n3) 00:20:16.349 Could not set queue depth (nvme0n4) 00:20:16.610 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:16.611 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:16.611 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:16.611 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:16.611 fio-3.35 00:20:16.611 Starting 4 threads 00:20:18.015 00:20:18.015 job0: (groupid=0, jobs=1): err= 0: pid=1502558: Tue Jun 11 12:14:30 2024 00:20:18.015 read: IOPS=114, BW=460KiB/s (471kB/s)(460KiB/1001msec) 00:20:18.015 slat (nsec): min=9966, max=24122, avg=23583.36, stdev=1288.60 00:20:18.015 clat (usec): min=842, max=42310, avg=5703.47, stdev=12995.30 00:20:18.015 lat (usec): min=866, max=42333, avg=5727.06, stdev=12994.99 00:20:18.015 clat percentiles (usec): 00:20:18.015 | 1.00th=[ 881], 5.00th=[ 988], 10.00th=[ 1012], 20.00th=[ 1045], 00:20:18.015 | 30.00th=[ 1074], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1106], 00:20:18.015 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[41157], 95.00th=[42206], 00:20:18.015 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:18.015 | 99.99th=[42206] 00:20:18.015 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:20:18.015 slat (nsec): min=8942, max=57428, avg=27592.79, stdev=7380.93 00:20:18.015 clat (usec): min=240, max=932, avg=631.93, stdev=113.63 00:20:18.015 lat (usec): min=262, max=962, avg=659.52, stdev=116.28 00:20:18.015 clat percentiles (usec): 00:20:18.015 | 1.00th=[ 334], 5.00th=[ 420], 10.00th=[ 474], 20.00th=[ 545], 00:20:18.015 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 685], 00:20:18.015 | 70.00th=[ 701], 80.00th=[ 725], 90.00th=[ 758], 95.00th=[ 791], 00:20:18.015 | 99.00th=[ 840], 99.50th=[ 857], 99.90th=[ 930], 99.95th=[ 930], 00:20:18.015 | 99.99th=[ 930] 00:20:18.015 bw ( KiB/s): min= 4096, max= 4096, per=50.75%, avg=4096.00, stdev= 0.00, samples=1 00:20:18.015 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:18.015 lat (usec) : 250=0.16%, 500=11.16%, 750=60.61%, 1000=11.00% 00:20:18.015 lat (msec) : 2=14.99%, 50=2.07% 00:20:18.015 cpu : usr=1.20%, sys=1.40%, ctx=627, majf=0, minf=1 00:20:18.015 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:18.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.015 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.015 issued rwts: total=115,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:18.015 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:18.015 job1: (groupid=0, jobs=1): err= 0: pid=1502578: Tue Jun 11 12:14:30 2024 00:20:18.015 read: IOPS=17, BW=71.7KiB/s (73.4kB/s)(72.0KiB/1004msec) 00:20:18.015 slat (nsec): min=25426, max=26437, avg=25711.00, stdev=257.01 00:20:18.015 clat (usec): min=836, max=42106, avg=39634.71, stdev=9686.07 00:20:18.015 lat (usec): min=862, max=42132, avg=39660.42, stdev=9686.09 00:20:18.015 clat percentiles (usec): 00:20:18.015 | 1.00th=[ 840], 5.00th=[ 840], 10.00th=[41157], 20.00th=[41681], 00:20:18.015 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:20:18.015 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:20:18.015 | 
99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:18.015 | 99.99th=[42206] 00:20:18.015 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:20:18.015 slat (nsec): min=8926, max=63954, avg=29926.17, stdev=9057.41 00:20:18.015 clat (usec): min=127, max=812, avg=520.45, stdev=120.61 00:20:18.015 lat (usec): min=137, max=845, avg=550.38, stdev=124.26 00:20:18.015 clat percentiles (usec): 00:20:18.016 | 1.00th=[ 237], 5.00th=[ 306], 10.00th=[ 363], 20.00th=[ 408], 00:20:18.016 | 30.00th=[ 461], 40.00th=[ 502], 50.00th=[ 529], 60.00th=[ 553], 00:20:18.016 | 70.00th=[ 594], 80.00th=[ 635], 90.00th=[ 668], 95.00th=[ 709], 00:20:18.016 | 99.00th=[ 758], 99.50th=[ 791], 99.90th=[ 816], 99.95th=[ 816], 00:20:18.016 | 99.99th=[ 816] 00:20:18.016 bw ( KiB/s): min= 4096, max= 4096, per=50.75%, avg=4096.00, stdev= 0.00, samples=1 00:20:18.016 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:18.016 lat (usec) : 250=1.32%, 500=37.36%, 750=56.79%, 1000=1.32% 00:20:18.016 lat (msec) : 50=3.21% 00:20:18.016 cpu : usr=1.20%, sys=1.89%, ctx=532, majf=0, minf=1 00:20:18.016 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:18.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.016 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:18.016 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:18.016 job2: (groupid=0, jobs=1): err= 0: pid=1502580: Tue Jun 11 12:14:30 2024 00:20:18.016 read: IOPS=15, BW=63.1KiB/s (64.6kB/s)(64.0KiB/1015msec) 00:20:18.016 slat (nsec): min=25665, max=26341, avg=25962.44, stdev=188.74 00:20:18.016 clat (usec): min=40782, max=42943, avg=41903.20, stdev=571.44 00:20:18.016 lat (usec): min=40808, max=42969, avg=41929.16, stdev=571.37 00:20:18.016 clat percentiles (usec): 00:20:18.016 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41681], 00:20:18.016 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:20:18.016 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:20:18.016 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:20:18.016 | 99.99th=[42730] 00:20:18.016 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:20:18.016 slat (nsec): min=8766, max=80745, avg=28933.00, stdev=8731.61 00:20:18.016 clat (usec): min=214, max=1122, avg=635.10, stdev=121.13 00:20:18.016 lat (usec): min=226, max=1154, avg=664.04, stdev=124.54 00:20:18.016 clat percentiles (usec): 00:20:18.016 | 1.00th=[ 338], 5.00th=[ 412], 10.00th=[ 482], 20.00th=[ 545], 00:20:18.016 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 644], 60.00th=[ 668], 00:20:18.016 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 766], 95.00th=[ 807], 00:20:18.016 | 99.00th=[ 922], 99.50th=[ 947], 99.90th=[ 1123], 99.95th=[ 1123], 00:20:18.016 | 99.99th=[ 1123] 00:20:18.016 bw ( KiB/s): min= 4096, max= 4096, per=50.75%, avg=4096.00, stdev= 0.00, samples=1 00:20:18.016 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:18.016 lat (usec) : 250=0.19%, 500=13.26%, 750=70.08%, 1000=13.07% 00:20:18.016 lat (msec) : 2=0.38%, 50=3.03% 00:20:18.016 cpu : usr=1.28%, sys=1.58%, ctx=529, majf=0, minf=1 00:20:18.016 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:18.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.016 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.016 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:18.016 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:18.016 job3: (groupid=0, jobs=1): err= 0: pid=1502581: Tue Jun 11 12:14:30 2024 00:20:18.016 read: IOPS=20, BW=83.5KiB/s (85.5kB/s)(84.0KiB/1006msec) 00:20:18.016 slat (nsec): min=25615, max=42898, avg=26803.57, stdev=3699.34 00:20:18.016 clat (usec): min=666, max=42000, avg=39758.97, stdev=8966.30 00:20:18.016 lat (usec): min=692, max=42026, avg=39785.77, stdev=8966.47 00:20:18.016 clat percentiles (usec): 00:20:18.016 | 1.00th=[ 668], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:20:18.016 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:20:18.016 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:20:18.016 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:18.016 | 99.99th=[42206] 00:20:18.016 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:20:18.016 slat (nsec): min=9406, max=49838, avg=23908.31, stdev=11295.70 00:20:18.016 clat (usec): min=112, max=588, avg=294.46, stdev=75.41 00:20:18.016 lat (usec): min=122, max=621, avg=318.36, stdev=78.30 00:20:18.016 clat percentiles (usec): 00:20:18.016 | 1.00th=[ 125], 5.00th=[ 139], 10.00th=[ 180], 20.00th=[ 245], 00:20:18.016 | 30.00th=[ 281], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 314], 00:20:18.016 | 70.00th=[ 330], 80.00th=[ 343], 90.00th=[ 375], 95.00th=[ 416], 00:20:18.016 | 99.00th=[ 453], 99.50th=[ 465], 99.90th=[ 586], 99.95th=[ 586], 00:20:18.016 | 99.99th=[ 586] 00:20:18.016 bw ( KiB/s): min= 4096, max= 4096, per=50.75%, avg=4096.00, stdev= 0.00, samples=1 00:20:18.016 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:18.016 lat (usec) : 250=20.08%, 500=75.80%, 750=0.38% 00:20:18.016 lat (msec) : 50=3.75% 00:20:18.016 cpu : usr=0.40%, sys=1.39%, ctx=536, majf=0, minf=1 00:20:18.016 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:18.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.016 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:18.016 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:18.016 00:20:18.016 Run status group 0 (all jobs): 00:20:18.016 READ: bw=670KiB/s (686kB/s), 63.1KiB/s-460KiB/s (64.6kB/s-471kB/s), io=680KiB (696kB), run=1001-1015msec 00:20:18.016 WRITE: bw=8071KiB/s (8265kB/s), 2018KiB/s-2046KiB/s (2066kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1015msec 00:20:18.016 00:20:18.016 Disk stats (read/write): 00:20:18.016 nvme0n1: ios=126/512, merge=0/0, ticks=496/314, in_queue=810, util=86.37% 00:20:18.016 nvme0n2: ios=49/512, merge=0/0, ticks=1522/192, in_queue=1714, util=99.08% 00:20:18.016 nvme0n3: ios=59/512, merge=0/0, ticks=526/263, in_queue=789, util=91.30% 00:20:18.016 nvme0n4: ios=74/512, merge=0/0, ticks=1549/142, in_queue=1691, util=100.00% 00:20:18.016 12:14:30 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:20:18.016 [global] 00:20:18.016 thread=1 00:20:18.016 invalidate=1 00:20:18.016 rw=write 00:20:18.016 time_based=1 00:20:18.016 runtime=1 00:20:18.016 ioengine=libaio 00:20:18.016 direct=1 00:20:18.016 bs=4096 00:20:18.016 iodepth=128 00:20:18.016 norandommap=0 00:20:18.016 numjobs=1 00:20:18.016 
00:20:18.016 verify_dump=1 00:20:18.016 verify_backlog=512 00:20:18.016 verify_state_save=0 00:20:18.016 do_verify=1 00:20:18.016 verify=crc32c-intel 00:20:18.016 [job0] 00:20:18.016 filename=/dev/nvme0n1 00:20:18.016 [job1] 00:20:18.016 filename=/dev/nvme0n2 00:20:18.016 [job2] 00:20:18.016 filename=/dev/nvme0n3 00:20:18.016 [job3] 00:20:18.016 filename=/dev/nvme0n4 00:20:18.016 Could not set queue depth (nvme0n1) 00:20:18.016 Could not set queue depth (nvme0n2) 00:20:18.016 Could not set queue depth (nvme0n3) 00:20:18.016 Could not set queue depth (nvme0n4) 00:20:18.283 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:18.283 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:18.283 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:18.283 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:18.283 fio-3.35 00:20:18.283 Starting 4 threads 00:20:19.688 00:20:19.688 job0: (groupid=0, jobs=1): err= 0: pid=1503027: Tue Jun 11 12:14:32 2024 00:20:19.688 read: IOPS=7202, BW=28.1MiB/s (29.5MB/s)(28.3MiB/1006msec) 00:20:19.688 slat (nsec): min=940, max=7913.6k, avg=44724.60, stdev=335590.80 00:20:19.688 clat (usec): min=1851, max=16931, avg=6028.16, stdev=1948.83 00:20:19.688 lat (usec): min=1856, max=18850, avg=6072.88, stdev=1966.95 00:20:19.688 clat percentiles (usec): 00:20:19.688 | 1.00th=[ 2573], 5.00th=[ 3949], 10.00th=[ 4621], 20.00th=[ 5014], 00:20:19.688 | 30.00th=[ 5145], 40.00th=[ 5342], 50.00th=[ 5473], 60.00th=[ 5735], 00:20:19.688 | 70.00th=[ 6063], 80.00th=[ 6980], 90.00th=[ 8225], 95.00th=[ 9503], 00:20:19.688 | 99.00th=[14615], 99.50th=[14746], 99.90th=[15926], 99.95th=[15926], 00:20:19.688 | 99.99th=[16909] 00:20:19.688 write: IOPS=7634, BW=29.8MiB/s (31.3MB/s)(30.0MiB/1006msec); 0 zone resets 00:20:19.688 slat (nsec): min=1621, max=53261k, avg=84163.89, stdev=1322704.87 00:20:19.688 clat (usec): min=1003, max=235143, avg=8627.22, stdev=15861.71 00:20:19.688 lat (usec): min=1011, max=235151, avg=8711.38, stdev=16087.96 00:20:19.688 clat percentiles (usec): 00:20:19.688 | 1.00th=[ 1549], 5.00th=[ 3097], 10.00th=[ 3458], 20.00th=[ 4113], 00:20:19.688 | 30.00th=[ 4752], 40.00th=[ 5080], 50.00th=[ 5211], 60.00th=[ 5407], 00:20:19.688 | 70.00th=[ 5604], 80.00th=[ 5866], 90.00th=[ 10028], 95.00th=[ 35914], 00:20:19.688 | 99.00th=[ 52167], 99.50th=[103285], 99.90th=[204473], 99.95th=[235930], 00:20:19.688 | 99.99th=[235930] 00:20:19.688 bw ( KiB/s): min=16384, max=44664, per=40.87%, avg=30524.00, stdev=19996.98, samples=2 00:20:19.688 iops : min= 4096, max=11166, avg=7631.00, stdev=4999.24, samples=2 00:20:19.688 lat (msec) : 2=1.05%, 4=10.67%, 10=81.13%, 20=3.21%, 50=2.91% 00:20:19.688 lat (msec) : 100=0.61%, 250=0.43% 00:20:19.688 cpu : usr=3.38%, sys=6.57%, ctx=620, majf=0, minf=1 00:20:19.688 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:20:19.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:19.688 issued rwts: total=7246,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.688 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:19.688 job1: (groupid=0, jobs=1): err= 0: pid=1503042: Tue Jun 11 12:14:32 2024 00:20:19.688 read: IOPS=1804, BW=7217KiB/s (7390kB/s)(7556KiB/1047msec) 
00:20:19.688 slat (nsec): min=857, max=29431k, avg=338182.61, stdev=2280800.91 00:20:19.688 clat (usec): min=6253, max=95004, avg=46687.06, stdev=26600.71 00:20:19.688 lat (usec): min=6260, max=95008, avg=47025.24, stdev=26685.96 00:20:19.688 clat percentiles (usec): 00:20:19.688 | 1.00th=[ 6849], 5.00th=[ 7504], 10.00th=[11338], 20.00th=[21627], 00:20:19.688 | 30.00th=[23987], 40.00th=[31327], 50.00th=[52167], 60.00th=[57410], 00:20:19.688 | 70.00th=[61080], 80.00th=[76022], 90.00th=[82314], 95.00th=[86508], 00:20:19.688 | 99.00th=[94897], 99.50th=[94897], 99.90th=[94897], 99.95th=[94897], 00:20:19.688 | 99.99th=[94897] 00:20:19.688 write: IOPS=1956, BW=7824KiB/s (8012kB/s)(8192KiB/1047msec); 0 zone resets 00:20:19.688 slat (nsec): min=1538, max=24991k, avg=171310.57, stdev=1202026.73 00:20:19.688 clat (usec): min=1135, max=64138, avg=21761.18, stdev=16619.29 00:20:19.688 lat (usec): min=1145, max=64145, avg=21932.49, stdev=16727.42 00:20:19.688 clat percentiles (usec): 00:20:19.688 | 1.00th=[ 1958], 5.00th=[ 6128], 10.00th=[ 7635], 20.00th=[ 8225], 00:20:19.688 | 30.00th=[ 9503], 40.00th=[11338], 50.00th=[17695], 60.00th=[22152], 00:20:19.688 | 70.00th=[25035], 80.00th=[28967], 90.00th=[54264], 95.00th=[58983], 00:20:19.688 | 99.00th=[64226], 99.50th=[64226], 99.90th=[64226], 99.95th=[64226], 00:20:19.688 | 99.99th=[64226] 00:20:19.688 bw ( KiB/s): min= 4096, max=12288, per=10.97%, avg=8192.00, stdev=5792.62, samples=2 00:20:19.688 iops : min= 1024, max= 3072, avg=2048.00, stdev=1448.15, samples=2 00:20:19.688 lat (msec) : 2=0.53%, 4=0.81%, 10=19.46%, 20=16.26%, 50=32.59% 00:20:19.688 lat (msec) : 100=30.35% 00:20:19.688 cpu : usr=1.15%, sys=1.91%, ctx=223, majf=0, minf=1 00:20:19.688 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:20:19.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:19.688 issued rwts: total=1889,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.688 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:19.688 job2: (groupid=0, jobs=1): err= 0: pid=1503067: Tue Jun 11 12:14:32 2024 00:20:19.688 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:20:19.688 slat (nsec): min=905, max=17658k, avg=143641.46, stdev=917235.43 00:20:19.688 clat (usec): min=5742, max=70464, avg=17583.17, stdev=12269.23 00:20:19.688 lat (usec): min=6139, max=70473, avg=17726.81, stdev=12390.19 00:20:19.688 clat percentiles (usec): 00:20:19.688 | 1.00th=[ 6783], 5.00th=[ 7504], 10.00th=[ 8979], 20.00th=[ 9372], 00:20:19.688 | 30.00th=[ 9634], 40.00th=[10290], 50.00th=[11994], 60.00th=[12518], 00:20:19.688 | 70.00th=[15533], 80.00th=[29754], 90.00th=[36963], 95.00th=[41157], 00:20:19.688 | 99.00th=[54789], 99.50th=[63701], 99.90th=[70779], 99.95th=[70779], 00:20:19.688 | 99.99th=[70779] 00:20:19.688 write: IOPS=3025, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1007msec); 0 zone resets 00:20:19.688 slat (nsec): min=1616, max=20293k, avg=202368.94, stdev=956324.46 00:20:19.688 clat (usec): min=5090, max=98901, avg=26123.43, stdev=25561.16 00:20:19.688 lat (usec): min=5119, max=98925, avg=26325.79, stdev=25730.19 00:20:19.688 clat percentiles (usec): 00:20:19.688 | 1.00th=[ 5997], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9241], 00:20:19.688 | 30.00th=[ 9503], 40.00th=[10290], 50.00th=[15401], 60.00th=[22414], 00:20:19.688 | 70.00th=[25035], 80.00th=[28705], 90.00th=[80217], 95.00th=[84411], 00:20:19.688 | 99.00th=[94897], 99.50th=[94897], 
99.90th=[99091], 99.95th=[99091], 00:20:19.688 | 99.99th=[99091] 00:20:19.688 bw ( KiB/s): min= 6600, max=16760, per=15.64%, avg=11680.00, stdev=7184.20, samples=2 00:20:19.688 iops : min= 1650, max= 4190, avg=2920.00, stdev=1796.05, samples=2 00:20:19.688 lat (msec) : 10=36.44%, 20=27.55%, 50=26.63%, 100=9.38% 00:20:19.688 cpu : usr=2.78%, sys=2.39%, ctx=376, majf=0, minf=1 00:20:19.688 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:20:19.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:19.688 issued rwts: total=2560,3047,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.688 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:19.688 job3: (groupid=0, jobs=1): err= 0: pid=1503074: Tue Jun 11 12:14:32 2024 00:20:19.688 read: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec) 00:20:19.688 slat (nsec): min=1897, max=11812k, avg=67523.72, stdev=573097.24 00:20:19.688 clat (usec): min=2336, max=26834, avg=9769.64, stdev=3434.23 00:20:19.688 lat (usec): min=2341, max=26839, avg=9837.17, stdev=3477.69 00:20:19.688 clat percentiles (usec): 00:20:19.688 | 1.00th=[ 4293], 5.00th=[ 5211], 10.00th=[ 5932], 20.00th=[ 6390], 00:20:19.688 | 30.00th=[ 7242], 40.00th=[ 8455], 50.00th=[ 9372], 60.00th=[10421], 00:20:19.688 | 70.00th=[11600], 80.00th=[12518], 90.00th=[14222], 95.00th=[15270], 00:20:19.688 | 99.00th=[18482], 99.50th=[22152], 99.90th=[23200], 99.95th=[26870], 00:20:19.688 | 99.99th=[26870] 00:20:19.688 write: IOPS=6747, BW=26.4MiB/s (27.6MB/s)(26.5MiB/1004msec); 0 zone resets 00:20:19.688 slat (nsec): min=1558, max=8626.8k, avg=67178.14, stdev=501045.00 00:20:19.688 clat (usec): min=1198, max=29033, avg=9218.49, stdev=5493.34 00:20:19.688 lat (usec): min=1209, max=29042, avg=9285.66, stdev=5527.27 00:20:19.688 clat percentiles (usec): 00:20:19.688 | 1.00th=[ 2073], 5.00th=[ 3392], 10.00th=[ 4080], 20.00th=[ 5669], 00:20:19.688 | 30.00th=[ 5997], 40.00th=[ 6390], 50.00th=[ 7111], 60.00th=[ 9110], 00:20:19.688 | 70.00th=[10290], 80.00th=[13042], 90.00th=[14615], 95.00th=[23725], 00:20:19.688 | 99.00th=[26870], 99.50th=[27919], 99.90th=[28967], 99.95th=[28967], 00:20:19.688 | 99.99th=[28967] 00:20:19.688 bw ( KiB/s): min=20816, max=32488, per=35.69%, avg=26652.00, stdev=8253.35, samples=2 00:20:19.688 iops : min= 5204, max= 8122, avg=6663.00, stdev=2063.34, samples=2 00:20:19.688 lat (msec) : 2=0.41%, 4=4.21%, 10=56.45%, 20=35.23%, 50=3.70% 00:20:19.688 cpu : usr=4.69%, sys=7.38%, ctx=341, majf=0, minf=1 00:20:19.688 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:20:19.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:19.688 issued rwts: total=6656,6774,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.688 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:19.688 00:20:19.688 Run status group 0 (all jobs): 00:20:19.689 READ: bw=68.5MiB/s (71.8MB/s), 7217KiB/s-28.1MiB/s (7390kB/s-29.5MB/s), io=71.7MiB (75.2MB), run=1004-1047msec 00:20:19.689 WRITE: bw=72.9MiB/s (76.5MB/s), 7824KiB/s-29.8MiB/s (8012kB/s-31.3MB/s), io=76.4MiB (80.1MB), run=1004-1047msec 00:20:19.689 00:20:19.689 Disk stats (read/write): 00:20:19.689 nvme0n1: ios=5662/6111, merge=0/0, ticks=32670/35737, in_queue=68407, util=98.60% 00:20:19.689 nvme0n2: ios=1570/1696, merge=0/0, ticks=19642/10235, in_queue=29877, util=87.46% 
00:20:19.689 nvme0n3: ios=2319/2560, merge=0/0, ticks=17730/32396, in_queue=50126, util=96.73% 00:20:19.689 nvme0n4: ios=5653/5816, merge=0/0, ticks=52482/48858, in_queue=101340, util=91.14% 00:20:19.689 12:14:32 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:20:19.689 [global] 00:20:19.689 thread=1 00:20:19.689 invalidate=1 00:20:19.689 rw=randwrite 00:20:19.689 time_based=1 00:20:19.689 runtime=1 00:20:19.689 ioengine=libaio 00:20:19.689 direct=1 00:20:19.689 bs=4096 00:20:19.689 iodepth=128 00:20:19.689 norandommap=0 00:20:19.689 numjobs=1 00:20:19.689 00:20:19.689 verify_dump=1 00:20:19.689 verify_backlog=512 00:20:19.689 verify_state_save=0 00:20:19.689 do_verify=1 00:20:19.689 verify=crc32c-intel 00:20:19.689 [job0] 00:20:19.689 filename=/dev/nvme0n1 00:20:19.689 [job1] 00:20:19.689 filename=/dev/nvme0n2 00:20:19.689 [job2] 00:20:19.689 filename=/dev/nvme0n3 00:20:19.689 [job3] 00:20:19.689 filename=/dev/nvme0n4 00:20:19.689 Could not set queue depth (nvme0n1) 00:20:19.689 Could not set queue depth (nvme0n2) 00:20:19.689 Could not set queue depth (nvme0n3) 00:20:19.689 Could not set queue depth (nvme0n4) 00:20:19.950 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:19.950 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:19.950 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:19.950 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:19.950 fio-3.35 00:20:19.950 Starting 4 threads 00:20:21.362 00:20:21.362 job0: (groupid=0, jobs=1): err= 0: pid=1503569: Tue Jun 11 12:14:34 2024 00:20:21.362 read: IOPS=7364, BW=28.8MiB/s (30.2MB/s)(28.8MiB/1002msec) 00:20:21.362 slat (nsec): min=882, max=4185.4k, avg=65875.94, stdev=401863.77 00:20:21.362 clat (usec): min=1083, max=14803, avg=8360.90, stdev=1059.33 00:20:21.362 lat (usec): min=3146, max=14804, avg=8426.78, stdev=1109.09 00:20:21.362 clat percentiles (usec): 00:20:21.362 | 1.00th=[ 5538], 5.00th=[ 6587], 10.00th=[ 7439], 20.00th=[ 7832], 00:20:21.362 | 30.00th=[ 8029], 40.00th=[ 8225], 50.00th=[ 8356], 60.00th=[ 8455], 00:20:21.362 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[10028], 00:20:21.362 | 99.00th=[11600], 99.50th=[12125], 99.90th=[13304], 99.95th=[13698], 00:20:21.362 | 99.99th=[14746] 00:20:21.362 write: IOPS=7664, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1002msec); 0 zone resets 00:20:21.362 slat (nsec): min=1512, max=6405.2k, avg=63140.36, stdev=303875.04 00:20:21.362 clat (usec): min=4719, max=15210, avg=8448.08, stdev=1212.64 00:20:21.362 lat (usec): min=4723, max=15242, avg=8511.22, stdev=1233.14 00:20:21.362 clat percentiles (usec): 00:20:21.362 | 1.00th=[ 5211], 5.00th=[ 7046], 10.00th=[ 7504], 20.00th=[ 7832], 00:20:21.362 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8291], 60.00th=[ 8455], 00:20:21.362 | 70.00th=[ 8717], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[10552], 00:20:21.362 | 99.00th=[14746], 99.50th=[15008], 99.90th=[15008], 99.95th=[15008], 00:20:21.362 | 99.99th=[15270] 00:20:21.362 bw ( KiB/s): min=29184, max=32256, per=27.53%, avg=30720.00, stdev=2172.23, samples=2 00:20:21.362 iops : min= 7296, max= 8064, avg=7680.00, stdev=543.06, samples=2 00:20:21.362 lat (msec) : 2=0.01%, 4=0.28%, 10=93.96%, 20=5.75% 00:20:21.362 cpu : usr=3.80%, 
sys=5.99%, ctx=913, majf=0, minf=1 00:20:21.362 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:20:21.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.362 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:21.362 issued rwts: total=7379,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:21.362 job1: (groupid=0, jobs=1): err= 0: pid=1503583: Tue Jun 11 12:14:34 2024 00:20:21.362 read: IOPS=7462, BW=29.2MiB/s (30.6MB/s)(30.5MiB/1046msec) 00:20:21.362 slat (nsec): min=847, max=6955.4k, avg=63956.68, stdev=417737.48 00:20:21.362 clat (usec): min=4406, max=53159, avg=8663.86, stdev=5284.44 00:20:21.362 lat (usec): min=4410, max=53161, avg=8727.82, stdev=5297.01 00:20:21.362 clat percentiles (usec): 00:20:21.362 | 1.00th=[ 5276], 5.00th=[ 6128], 10.00th=[ 7046], 20.00th=[ 7570], 00:20:21.362 | 30.00th=[ 7701], 40.00th=[ 7832], 50.00th=[ 8029], 60.00th=[ 8094], 00:20:21.362 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 9241], 95.00th=[10683], 00:20:21.362 | 99.00th=[46924], 99.50th=[50594], 99.90th=[52691], 99.95th=[53216], 00:20:21.362 | 99.99th=[53216] 00:20:21.362 write: IOPS=7831, BW=30.6MiB/s (32.1MB/s)(32.0MiB/1046msec); 0 zone resets 00:20:21.362 slat (nsec): min=1435, max=4337.6k, avg=57744.37, stdev=263946.34 00:20:21.362 clat (usec): min=2092, max=53162, avg=7935.10, stdev=1052.58 00:20:21.362 lat (usec): min=2101, max=53165, avg=7992.84, stdev=1069.37 00:20:21.362 clat percentiles (usec): 00:20:21.362 | 1.00th=[ 4817], 5.00th=[ 6390], 10.00th=[ 7177], 20.00th=[ 7504], 00:20:21.362 | 30.00th=[ 7701], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8029], 00:20:21.362 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8848], 95.00th=[ 9241], 00:20:21.362 | 99.00th=[10552], 99.50th=[10945], 99.90th=[12125], 99.95th=[12649], 00:20:21.362 | 99.99th=[53216] 00:20:21.362 bw ( KiB/s): min=32752, max=32768, per=29.35%, avg=32760.00, stdev=11.31, samples=2 00:20:21.362 iops : min= 8188, max= 8192, avg=8190.00, stdev= 2.83, samples=2 00:20:21.362 lat (msec) : 4=0.06%, 10=95.12%, 20=4.03%, 50=0.40%, 100=0.39% 00:20:21.362 cpu : usr=3.16%, sys=6.51%, ctx=1030, majf=0, minf=1 00:20:21.362 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:20:21.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.362 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:21.362 issued rwts: total=7806,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:21.362 job2: (groupid=0, jobs=1): err= 0: pid=1503601: Tue Jun 11 12:14:34 2024 00:20:21.362 read: IOPS=5394, BW=21.1MiB/s (22.1MB/s)(21.2MiB/1006msec) 00:20:21.362 slat (nsec): min=916, max=10570k, avg=94707.75, stdev=732497.75 00:20:21.362 clat (usec): min=1995, max=22668, avg=12242.46, stdev=3112.81 00:20:21.362 lat (usec): min=2001, max=22677, avg=12337.17, stdev=3144.25 00:20:21.362 clat percentiles (usec): 00:20:21.362 | 1.00th=[ 6063], 5.00th=[ 8029], 10.00th=[ 8979], 20.00th=[10028], 00:20:21.362 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11469], 60.00th=[11994], 00:20:21.362 | 70.00th=[13042], 80.00th=[14615], 90.00th=[17433], 95.00th=[19006], 00:20:21.362 | 99.00th=[20317], 99.50th=[20841], 99.90th=[22676], 99.95th=[22676], 00:20:21.362 | 99.99th=[22676] 00:20:21.362 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:20:21.362 slat 
(nsec): min=1565, max=10309k, avg=81653.88, stdev=669111.56 00:20:21.362 clat (usec): min=1164, max=21794, avg=10867.04, stdev=2882.68 00:20:21.362 lat (usec): min=1174, max=21811, avg=10948.70, stdev=2914.91 00:20:21.362 clat percentiles (usec): 00:20:21.362 | 1.00th=[ 4178], 5.00th=[ 6849], 10.00th=[ 7242], 20.00th=[ 7570], 00:20:21.362 | 30.00th=[ 9503], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:20:21.362 | 70.00th=[11863], 80.00th=[12125], 90.00th=[15795], 95.00th=[16188], 00:20:21.362 | 99.00th=[17433], 99.50th=[17433], 99.90th=[21627], 99.95th=[21627], 00:20:21.362 | 99.99th=[21890] 00:20:21.362 bw ( KiB/s): min=21808, max=23248, per=20.19%, avg=22528.00, stdev=1018.23, samples=2 00:20:21.362 iops : min= 5452, max= 5812, avg=5632.00, stdev=254.56, samples=2 00:20:21.362 lat (msec) : 2=0.05%, 4=0.32%, 10=26.57%, 20=72.07%, 50=0.99% 00:20:21.362 cpu : usr=4.28%, sys=5.57%, ctx=301, majf=0, minf=1 00:20:21.362 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:20:21.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.362 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:21.362 issued rwts: total=5427,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:21.362 job3: (groupid=0, jobs=1): err= 0: pid=1503607: Tue Jun 11 12:14:34 2024 00:20:21.362 read: IOPS=7251, BW=28.3MiB/s (29.7MB/s)(28.5MiB/1006msec) 00:20:21.362 slat (nsec): min=951, max=7936.7k, avg=68043.99, stdev=493936.14 00:20:21.362 clat (usec): min=2570, max=16140, avg=9053.64, stdev=2271.70 00:20:21.362 lat (usec): min=2986, max=16146, avg=9121.69, stdev=2289.71 00:20:21.362 clat percentiles (usec): 00:20:21.362 | 1.00th=[ 4047], 5.00th=[ 6128], 10.00th=[ 6783], 20.00th=[ 7439], 00:20:21.362 | 30.00th=[ 7767], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 9241], 00:20:21.362 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[12387], 95.00th=[13960], 00:20:21.362 | 99.00th=[15533], 99.50th=[15926], 99.90th=[16057], 99.95th=[16188], 00:20:21.362 | 99.99th=[16188] 00:20:21.362 write: IOPS=7634, BW=29.8MiB/s (31.3MB/s)(30.0MiB/1006msec); 0 zone resets 00:20:21.362 slat (nsec): min=1523, max=7425.8k, avg=60819.15, stdev=448388.14 00:20:21.362 clat (usec): min=815, max=16137, avg=8013.65, stdev=2259.97 00:20:21.362 lat (usec): min=827, max=16140, avg=8074.47, stdev=2280.82 00:20:21.362 clat percentiles (usec): 00:20:21.362 | 1.00th=[ 2769], 5.00th=[ 4555], 10.00th=[ 5080], 20.00th=[ 5604], 00:20:21.362 | 30.00th=[ 6980], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8717], 00:20:21.362 | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[11600], 95.00th=[11863], 00:20:21.362 | 99.00th=[13435], 99.50th=[13566], 99.90th=[15664], 99.95th=[15926], 00:20:21.362 | 99.99th=[16188] 00:20:21.362 bw ( KiB/s): min=30056, max=31376, per=27.52%, avg=30716.00, stdev=933.38, samples=2 00:20:21.362 iops : min= 7514, max= 7844, avg=7679.00, stdev=233.35, samples=2 00:20:21.362 lat (usec) : 1000=0.02% 00:20:21.362 lat (msec) : 2=0.19%, 4=2.00%, 10=77.77%, 20=20.03% 00:20:21.362 cpu : usr=5.37%, sys=7.06%, ctx=540, majf=0, minf=1 00:20:21.362 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:20:21.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.362 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:21.362 issued rwts: total=7295,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.362 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:20:21.362 00:20:21.362 Run status group 0 (all jobs): 00:20:21.362 READ: bw=104MiB/s (109MB/s), 21.1MiB/s-29.2MiB/s (22.1MB/s-30.6MB/s), io=109MiB (114MB), run=1002-1046msec 00:20:21.362 WRITE: bw=109MiB/s (114MB/s), 21.9MiB/s-30.6MiB/s (22.9MB/s-32.1MB/s), io=114MiB (120MB), run=1002-1046msec 00:20:21.362 00:20:21.362 Disk stats (read/write): 00:20:21.362 nvme0n1: ios=6175/6487, merge=0/0, ticks=24210/23103, in_queue=47313, util=96.49% 00:20:21.362 nvme0n2: ios=6690/6687, merge=0/0, ticks=26265/24438, in_queue=50703, util=86.85% 00:20:21.362 nvme0n3: ios=4531/4608, merge=0/0, ticks=53280/47997, in_queue=101277, util=91.99% 00:20:21.363 nvme0n4: ios=6144/6335, merge=0/0, ticks=53216/48263, in_queue=101479, util=89.54% 00:20:21.363 12:14:34 -- target/fio.sh@55 -- # sync 00:20:21.363 12:14:34 -- target/fio.sh@59 -- # fio_pid=1503692 00:20:21.363 12:14:34 -- target/fio.sh@61 -- # sleep 3 00:20:21.363 12:14:34 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:20:21.363 [global] 00:20:21.363 thread=1 00:20:21.363 invalidate=1 00:20:21.363 rw=read 00:20:21.363 time_based=1 00:20:21.363 runtime=10 00:20:21.363 ioengine=libaio 00:20:21.363 direct=1 00:20:21.363 bs=4096 00:20:21.363 iodepth=1 00:20:21.363 norandommap=1 00:20:21.363 numjobs=1 00:20:21.363 00:20:21.363 [job0] 00:20:21.363 filename=/dev/nvme0n1 00:20:21.363 [job1] 00:20:21.363 filename=/dev/nvme0n2 00:20:21.363 [job2] 00:20:21.363 filename=/dev/nvme0n3 00:20:21.363 [job3] 00:20:21.363 filename=/dev/nvme0n4 00:20:21.363 Could not set queue depth (nvme0n1) 00:20:21.363 Could not set queue depth (nvme0n2) 00:20:21.363 Could not set queue depth (nvme0n3) 00:20:21.363 Could not set queue depth (nvme0n4) 00:20:21.629 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:21.629 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:21.629 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:21.629 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:21.629 fio-3.35 00:20:21.629 Starting 4 threads 00:20:24.173 12:14:37 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:20:24.173 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=2265088, buflen=4096 00:20:24.173 fio: pid=1504093, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:24.433 12:14:37 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:20:24.433 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=753664, buflen=4096 00:20:24.433 fio: pid=1504087, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:24.433 12:14:37 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:24.433 12:14:37 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:20:24.694 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=286720, buflen=4096 00:20:24.694 fio: pid=1504050, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:24.694 12:14:37 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:20:24.694 12:14:37 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:20:24.694 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=991232, buflen=4096 00:20:24.694 fio: pid=1504067, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:24.694 12:14:37 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:24.694 12:14:37 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:20:24.694 00:20:24.694 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1504050: Tue Jun 11 12:14:37 2024 00:20:24.695 read: IOPS=24, BW=96.7KiB/s (99.0kB/s)(280KiB/2895msec) 00:20:24.695 slat (usec): min=23, max=247, avg=27.49, stdev=26.50 00:20:24.695 clat (usec): min=923, max=42942, avg=41297.24, stdev=4918.14 00:20:24.695 lat (usec): min=961, max=42966, avg=41324.76, stdev=4916.66 00:20:24.695 clat percentiles (usec): 00:20:24.695 | 1.00th=[ 922], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:20:24.695 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:20:24.695 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:20:24.695 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:20:24.695 | 99.99th=[42730] 00:20:24.695 bw ( KiB/s): min= 96, max= 96, per=6.96%, avg=96.00, stdev= 0.00, samples=5 00:20:24.695 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:20:24.695 lat (usec) : 1000=1.41% 00:20:24.695 lat (msec) : 50=97.18% 00:20:24.695 cpu : usr=0.10%, sys=0.00%, ctx=72, majf=0, minf=1 00:20:24.695 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:24.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.695 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.695 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.695 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:24.695 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1504067: Tue Jun 11 12:14:37 2024 00:20:24.695 read: IOPS=79, BW=318KiB/s (326kB/s)(968KiB/3041msec) 00:20:24.695 slat (usec): min=6, max=12879, avg=172.04, stdev=1240.70 00:20:24.695 clat (usec): min=375, max=42049, avg=12382.69, stdev=18577.62 00:20:24.695 lat (usec): min=400, max=42073, avg=12555.34, stdev=18526.56 00:20:24.695 clat percentiles (usec): 00:20:24.695 | 1.00th=[ 404], 5.00th=[ 441], 10.00th=[ 490], 20.00th=[ 619], 00:20:24.695 | 30.00th=[ 676], 40.00th=[ 709], 50.00th=[ 766], 60.00th=[ 791], 00:20:24.695 | 70.00th=[ 865], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:20:24.695 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:24.695 | 99.99th=[42206] 00:20:24.695 bw ( KiB/s): min= 96, max= 96, per=6.96%, avg=96.00, stdev= 0.00, samples=5 00:20:24.695 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:20:24.695 lat (usec) : 500=10.29%, 750=34.57%, 1000=26.34% 00:20:24.695 lat (msec) : 50=28.40% 00:20:24.695 cpu : usr=0.00%, sys=0.26%, ctx=249, majf=0, minf=1 00:20:24.695 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:24.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.695 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:20:24.695 issued rwts: total=243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.695 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:24.695 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1504087: Tue Jun 11 12:14:37 2024 00:20:24.695 read: IOPS=67, BW=270KiB/s (277kB/s)(736KiB/2725msec) 00:20:24.695 slat (usec): min=5, max=250, avg=25.13, stdev=17.75 00:20:24.695 clat (usec): min=425, max=42042, avg=14773.54, stdev=19500.70 00:20:24.695 lat (usec): min=461, max=42067, avg=14798.67, stdev=19503.09 00:20:24.695 clat percentiles (usec): 00:20:24.695 | 1.00th=[ 441], 5.00th=[ 537], 10.00th=[ 586], 20.00th=[ 652], 00:20:24.695 | 30.00th=[ 750], 40.00th=[ 799], 50.00th=[ 832], 60.00th=[ 906], 00:20:24.695 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:20:24.695 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:24.695 | 99.99th=[42206] 00:20:24.695 bw ( KiB/s): min= 96, max= 1040, per=20.58%, avg=284.80, stdev=422.17, samples=5 00:20:24.695 iops : min= 24, max= 260, avg=71.20, stdev=105.54, samples=5 00:20:24.695 lat (usec) : 500=3.24%, 750=26.49%, 1000=33.51% 00:20:24.695 lat (msec) : 2=2.16%, 50=34.05% 00:20:24.695 cpu : usr=0.07%, sys=0.18%, ctx=186, majf=0, minf=1 00:20:24.695 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:24.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.695 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.695 issued rwts: total=185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.695 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:24.695 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1504093: Tue Jun 11 12:14:37 2024 00:20:24.695 read: IOPS=215, BW=861KiB/s (882kB/s)(2212KiB/2568msec) 00:20:24.695 slat (nsec): min=5104, max=59970, avg=22971.32, stdev=6892.44 00:20:24.695 clat (usec): min=203, max=43025, avg=4612.06, stdev=12039.35 00:20:24.695 lat (usec): min=211, max=43049, avg=4635.03, stdev=12039.87 00:20:24.695 clat percentiles (usec): 00:20:24.695 | 1.00th=[ 441], 5.00th=[ 578], 10.00th=[ 627], 20.00th=[ 668], 00:20:24.695 | 30.00th=[ 717], 40.00th=[ 750], 50.00th=[ 766], 60.00th=[ 775], 00:20:24.695 | 70.00th=[ 791], 80.00th=[ 816], 90.00th=[ 1057], 95.00th=[42206], 00:20:24.695 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:20:24.695 | 99.99th=[43254] 00:20:24.695 bw ( KiB/s): min= 96, max= 2224, per=63.85%, avg=881.60, stdev=1081.96, samples=5 00:20:24.695 iops : min= 24, max= 556, avg=220.40, stdev=270.49, samples=5 00:20:24.695 lat (usec) : 250=0.18%, 500=2.17%, 750=36.28%, 1000=50.36% 00:20:24.695 lat (msec) : 2=1.44%, 50=9.39% 00:20:24.695 cpu : usr=0.12%, sys=0.66%, ctx=554, majf=0, minf=2 00:20:24.695 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:24.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.695 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.695 issued rwts: total=554,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.695 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:24.695 00:20:24.695 Run status group 0 (all jobs): 00:20:24.695 READ: bw=1380KiB/s (1413kB/s), 96.7KiB/s-861KiB/s (99.0kB/s-882kB/s), io=4196KiB (4297kB), run=2568-3041msec 00:20:24.695 00:20:24.695 Disk stats (read/write): 00:20:24.695 nvme0n1: ios=68/0, 
merge=0/0, ticks=2809/0, in_queue=2809, util=94.79% 00:20:24.695 nvme0n2: ios=68/0, merge=0/0, ticks=2800/0, in_queue=2800, util=95.30% 00:20:24.695 nvme0n3: ios=180/0, merge=0/0, ticks=2544/0, in_queue=2544, util=96.03% 00:20:24.695 nvme0n4: ios=547/0, merge=0/0, ticks=2290/0, in_queue=2290, util=96.06% 00:20:24.956 12:14:37 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:24.956 12:14:37 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:20:25.217 12:14:38 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:25.217 12:14:38 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:20:25.217 12:14:38 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:25.217 12:14:38 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:20:25.477 12:14:38 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:25.477 12:14:38 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:20:25.738 12:14:38 -- target/fio.sh@69 -- # fio_status=0 00:20:25.738 12:14:38 -- target/fio.sh@70 -- # wait 1503692 00:20:25.738 12:14:38 -- target/fio.sh@70 -- # fio_status=4 00:20:25.738 12:14:38 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:25.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:25.738 12:14:38 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:25.738 12:14:38 -- common/autotest_common.sh@1198 -- # local i=0 00:20:25.738 12:14:38 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:25.738 12:14:38 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:25.738 12:14:38 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:25.738 12:14:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:25.738 12:14:38 -- common/autotest_common.sh@1210 -- # return 0 00:20:25.738 12:14:38 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:20:25.738 12:14:38 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:20:25.738 nvmf hotplug test: fio failed as expected 00:20:25.738 12:14:38 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:25.998 12:14:38 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:20:25.998 12:14:38 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:20:25.998 12:14:38 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:20:25.998 12:14:38 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:20:25.998 12:14:38 -- target/fio.sh@91 -- # nvmftestfini 00:20:25.998 12:14:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:25.998 12:14:38 -- nvmf/common.sh@116 -- # sync 00:20:25.999 12:14:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:25.999 12:14:38 -- nvmf/common.sh@119 -- # set +e 00:20:25.999 12:14:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:25.999 12:14:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:25.999 rmmod nvme_tcp 00:20:25.999 rmmod nvme_fabrics 00:20:25.999 rmmod nvme_keyring 00:20:25.999 12:14:38 -- nvmf/common.sh@122 -- # modprobe 
-v -r nvme-fabrics 00:20:25.999 12:14:38 -- nvmf/common.sh@123 -- # set -e 00:20:25.999 12:14:38 -- nvmf/common.sh@124 -- # return 0 00:20:25.999 12:14:38 -- nvmf/common.sh@477 -- # '[' -n 1500247 ']' 00:20:25.999 12:14:38 -- nvmf/common.sh@478 -- # killprocess 1500247 00:20:25.999 12:14:38 -- common/autotest_common.sh@926 -- # '[' -z 1500247 ']' 00:20:25.999 12:14:38 -- common/autotest_common.sh@930 -- # kill -0 1500247 00:20:25.999 12:14:38 -- common/autotest_common.sh@931 -- # uname 00:20:25.999 12:14:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:25.999 12:14:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1500247 00:20:25.999 12:14:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:25.999 12:14:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:25.999 12:14:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1500247' 00:20:25.999 killing process with pid 1500247 00:20:25.999 12:14:38 -- common/autotest_common.sh@945 -- # kill 1500247 00:20:25.999 12:14:38 -- common/autotest_common.sh@950 -- # wait 1500247 00:20:25.999 12:14:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:25.999 12:14:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:25.999 12:14:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:25.999 12:14:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:25.999 12:14:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:25.999 12:14:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.999 12:14:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:25.999 12:14:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.542 12:14:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:28.542 00:20:28.542 real 0m28.059s 00:20:28.542 user 2m27.095s 00:20:28.542 sys 0m8.627s 00:20:28.542 12:14:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:28.542 12:14:41 -- common/autotest_common.sh@10 -- # set +x 00:20:28.542 ************************************ 00:20:28.542 END TEST nvmf_fio_target 00:20:28.542 ************************************ 00:20:28.542 12:14:41 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:28.542 12:14:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:28.542 12:14:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:28.542 12:14:41 -- common/autotest_common.sh@10 -- # set +x 00:20:28.542 ************************************ 00:20:28.542 START TEST nvmf_bdevio 00:20:28.542 ************************************ 00:20:28.542 12:14:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:28.542 * Looking for test storage... 
00:20:28.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:28.542 12:14:41 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:28.542 12:14:41 -- nvmf/common.sh@7 -- # uname -s 00:20:28.542 12:14:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.542 12:14:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.542 12:14:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.542 12:14:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.542 12:14:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.542 12:14:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.542 12:14:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.542 12:14:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.542 12:14:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.542 12:14:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.542 12:14:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:28.542 12:14:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:28.542 12:14:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.542 12:14:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.542 12:14:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:28.542 12:14:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:28.542 12:14:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.542 12:14:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.542 12:14:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.543 12:14:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.543 12:14:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.543 12:14:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.543 12:14:41 -- paths/export.sh@5 -- # export PATH 00:20:28.543 12:14:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.543 12:14:41 -- nvmf/common.sh@46 -- # : 0 00:20:28.543 12:14:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:28.543 12:14:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:28.543 12:14:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:28.543 12:14:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.543 12:14:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.543 12:14:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:28.543 12:14:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:28.543 12:14:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:28.543 12:14:41 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:28.543 12:14:41 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:28.543 12:14:41 -- target/bdevio.sh@14 -- # nvmftestinit 00:20:28.543 12:14:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:28.543 12:14:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.543 12:14:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:28.543 12:14:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:28.543 12:14:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:28.543 12:14:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.543 12:14:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:28.543 12:14:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.543 12:14:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:28.543 12:14:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:28.543 12:14:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:28.543 12:14:41 -- common/autotest_common.sh@10 -- # set +x 00:20:35.127 12:14:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:35.127 12:14:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:35.127 12:14:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:35.127 12:14:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:35.127 12:14:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:35.127 12:14:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:35.127 12:14:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:35.127 12:14:48 -- nvmf/common.sh@294 -- # net_devs=() 00:20:35.127 12:14:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:35.127 12:14:48 -- nvmf/common.sh@295 
-- # e810=() 00:20:35.127 12:14:48 -- nvmf/common.sh@295 -- # local -ga e810 00:20:35.127 12:14:48 -- nvmf/common.sh@296 -- # x722=() 00:20:35.127 12:14:48 -- nvmf/common.sh@296 -- # local -ga x722 00:20:35.127 12:14:48 -- nvmf/common.sh@297 -- # mlx=() 00:20:35.127 12:14:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:35.127 12:14:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:35.127 12:14:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:35.127 12:14:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:35.127 12:14:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:35.127 12:14:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:35.127 12:14:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:35.127 12:14:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:35.127 12:14:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:35.127 12:14:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:35.127 12:14:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:35.127 12:14:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:35.127 12:14:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:35.127 12:14:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:35.127 12:14:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:35.127 12:14:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:35.127 12:14:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:35.127 12:14:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:35.127 12:14:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:35.127 12:14:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:35.127 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:35.127 12:14:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:35.127 12:14:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:35.128 12:14:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.128 12:14:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.128 12:14:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:35.128 12:14:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:35.128 12:14:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:35.128 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:35.128 12:14:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:35.128 12:14:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:35.128 12:14:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.128 12:14:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.128 12:14:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:35.128 12:14:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:35.128 12:14:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:35.128 12:14:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:35.128 12:14:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:35.128 12:14:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.128 12:14:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:35.128 12:14:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.128 12:14:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:35.128 Found 
net devices under 0000:31:00.0: cvl_0_0 00:20:35.128 12:14:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.128 12:14:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:35.128 12:14:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.128 12:14:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:35.128 12:14:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.128 12:14:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:35.128 Found net devices under 0000:31:00.1: cvl_0_1 00:20:35.128 12:14:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.128 12:14:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:35.128 12:14:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:35.128 12:14:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:35.128 12:14:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:35.128 12:14:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:35.128 12:14:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:35.128 12:14:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:35.128 12:14:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:35.128 12:14:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:35.128 12:14:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:35.128 12:14:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:35.128 12:14:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:35.128 12:14:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:35.128 12:14:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:35.128 12:14:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:35.128 12:14:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:35.128 12:14:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:35.128 12:14:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:35.390 12:14:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:35.390 12:14:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:35.390 12:14:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:35.390 12:14:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:35.390 12:14:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:35.390 12:14:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:35.390 12:14:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:35.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:35.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:20:35.390 00:20:35.390 --- 10.0.0.2 ping statistics --- 00:20:35.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.390 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:20:35.390 12:14:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:35.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:35.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:20:35.390 00:20:35.390 --- 10.0.0.1 ping statistics --- 00:20:35.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.390 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:20:35.390 12:14:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:35.390 12:14:48 -- nvmf/common.sh@410 -- # return 0 00:20:35.390 12:14:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:35.390 12:14:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:35.390 12:14:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:35.390 12:14:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:35.390 12:14:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:35.390 12:14:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:35.390 12:14:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:35.390 12:14:48 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:35.390 12:14:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:35.390 12:14:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:35.390 12:14:48 -- common/autotest_common.sh@10 -- # set +x 00:20:35.390 12:14:48 -- nvmf/common.sh@469 -- # nvmfpid=1509108 00:20:35.390 12:14:48 -- nvmf/common.sh@470 -- # waitforlisten 1509108 00:20:35.390 12:14:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:35.390 12:14:48 -- common/autotest_common.sh@819 -- # '[' -z 1509108 ']' 00:20:35.390 12:14:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.390 12:14:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:35.390 12:14:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.390 12:14:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:35.390 12:14:48 -- common/autotest_common.sh@10 -- # set +x 00:20:35.390 [2024-06-11 12:14:48.423916] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:35.390 [2024-06-11 12:14:48.423978] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.651 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.651 [2024-06-11 12:14:48.512606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:35.651 [2024-06-11 12:14:48.558812] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:35.651 [2024-06-11 12:14:48.558957] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.651 [2024-06-11 12:14:48.558968] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.651 [2024-06-11 12:14:48.558976] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:35.651 [2024-06-11 12:14:48.559074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:35.651 [2024-06-11 12:14:48.559277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:35.651 [2024-06-11 12:14:48.559439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:35.651 [2024-06-11 12:14:48.559439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:36.224 12:14:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:36.224 12:14:49 -- common/autotest_common.sh@852 -- # return 0 00:20:36.224 12:14:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:36.224 12:14:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:36.224 12:14:49 -- common/autotest_common.sh@10 -- # set +x 00:20:36.485 12:14:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.485 12:14:49 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:36.485 12:14:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:36.485 12:14:49 -- common/autotest_common.sh@10 -- # set +x 00:20:36.485 [2024-06-11 12:14:49.266727] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.485 12:14:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:36.485 12:14:49 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:36.485 12:14:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:36.485 12:14:49 -- common/autotest_common.sh@10 -- # set +x 00:20:36.485 Malloc0 00:20:36.485 12:14:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:36.485 12:14:49 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:36.485 12:14:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:36.485 12:14:49 -- common/autotest_common.sh@10 -- # set +x 00:20:36.485 12:14:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:36.485 12:14:49 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:36.485 12:14:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:36.485 12:14:49 -- common/autotest_common.sh@10 -- # set +x 00:20:36.485 12:14:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:36.485 12:14:49 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:36.485 12:14:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:36.485 12:14:49 -- common/autotest_common.sh@10 -- # set +x 00:20:36.485 [2024-06-11 12:14:49.331521] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.485 12:14:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:36.485 12:14:49 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:36.485 12:14:49 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:36.485 12:14:49 -- nvmf/common.sh@520 -- # config=() 00:20:36.485 12:14:49 -- nvmf/common.sh@520 -- # local subsystem config 00:20:36.485 12:14:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:36.485 12:14:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:36.485 { 00:20:36.485 "params": { 00:20:36.485 "name": "Nvme$subsystem", 00:20:36.485 "trtype": "$TEST_TRANSPORT", 00:20:36.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.485 "adrfam": "ipv4", 00:20:36.485 "trsvcid": 
"$NVMF_PORT", 00:20:36.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.485 "hdgst": ${hdgst:-false}, 00:20:36.485 "ddgst": ${ddgst:-false} 00:20:36.485 }, 00:20:36.485 "method": "bdev_nvme_attach_controller" 00:20:36.485 } 00:20:36.485 EOF 00:20:36.485 )") 00:20:36.485 12:14:49 -- nvmf/common.sh@542 -- # cat 00:20:36.485 12:14:49 -- nvmf/common.sh@544 -- # jq . 00:20:36.485 12:14:49 -- nvmf/common.sh@545 -- # IFS=, 00:20:36.485 12:14:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:36.485 "params": { 00:20:36.485 "name": "Nvme1", 00:20:36.485 "trtype": "tcp", 00:20:36.485 "traddr": "10.0.0.2", 00:20:36.485 "adrfam": "ipv4", 00:20:36.485 "trsvcid": "4420", 00:20:36.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.485 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.485 "hdgst": false, 00:20:36.485 "ddgst": false 00:20:36.485 }, 00:20:36.485 "method": "bdev_nvme_attach_controller" 00:20:36.485 }' 00:20:36.485 [2024-06-11 12:14:49.385348] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:36.485 [2024-06-11 12:14:49.385418] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509305 ] 00:20:36.485 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.486 [2024-06-11 12:14:49.452758] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:36.486 [2024-06-11 12:14:49.490733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.486 [2024-06-11 12:14:49.490854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.486 [2024-06-11 12:14:49.490857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.747 [2024-06-11 12:14:49.620078] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:20:36.747 [2024-06-11 12:14:49.620110] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:36.747 I/O targets: 00:20:36.747 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:36.747 00:20:36.747 00:20:36.747 CUnit - A unit testing framework for C - Version 2.1-3 00:20:36.747 http://cunit.sourceforge.net/ 00:20:36.747 00:20:36.747 00:20:36.747 Suite: bdevio tests on: Nvme1n1 00:20:36.747 Test: blockdev write read block ...passed 00:20:36.747 Test: blockdev write zeroes read block ...passed 00:20:36.747 Test: blockdev write zeroes read no split ...passed 00:20:36.747 Test: blockdev write zeroes read split ...passed 00:20:37.008 Test: blockdev write zeroes read split partial ...passed 00:20:37.008 Test: blockdev reset ...[2024-06-11 12:14:49.792606] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:37.008 [2024-06-11 12:14:49.792669] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe5290 (9): Bad file descriptor 00:20:37.008 [2024-06-11 12:14:49.851382] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:37.008 passed 00:20:37.008 Test: blockdev write read 8 blocks ...passed 00:20:37.008 Test: blockdev write read size > 128k ...passed 00:20:37.008 Test: blockdev write read invalid size ...passed 00:20:37.008 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:37.008 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:37.008 Test: blockdev write read max offset ...passed 00:20:37.008 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:37.008 Test: blockdev writev readv 8 blocks ...passed 00:20:37.008 Test: blockdev writev readv 30 x 1block ...passed 00:20:37.008 Test: blockdev writev readv block ...passed 00:20:37.008 Test: blockdev writev readv size > 128k ...passed 00:20:37.008 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:37.008 Test: blockdev comparev and writev ...[2024-06-11 12:14:50.034701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:37.008 [2024-06-11 12:14:50.034727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:37.008 [2024-06-11 12:14:50.034738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:37.008 [2024-06-11 12:14:50.034744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:37.008 [2024-06-11 12:14:50.035135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:37.008 [2024-06-11 12:14:50.035144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:37.008 [2024-06-11 12:14:50.035154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:37.008 [2024-06-11 12:14:50.035159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:37.008 [2024-06-11 12:14:50.035502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:37.008 [2024-06-11 12:14:50.035510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:37.008 [2024-06-11 12:14:50.035520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:37.008 [2024-06-11 12:14:50.035525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:37.008 [2024-06-11 12:14:50.035906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:37.008 [2024-06-11 12:14:50.035918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:37.008 [2024-06-11 12:14:50.035927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:37.008 [2024-06-11 12:14:50.035932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:37.269 passed 00:20:37.269 Test: blockdev nvme passthru rw ...passed 00:20:37.269 Test: blockdev nvme passthru vendor specific ...[2024-06-11 12:14:50.119830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:37.269 [2024-06-11 12:14:50.119845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:37.269 [2024-06-11 12:14:50.120208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:37.269 [2024-06-11 12:14:50.120215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:37.269 [2024-06-11 12:14:50.120593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:37.269 [2024-06-11 12:14:50.120599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:37.269 [2024-06-11 12:14:50.120918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:37.269 [2024-06-11 12:14:50.120924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:37.269 passed 00:20:37.269 Test: blockdev nvme admin passthru ...passed 00:20:37.269 Test: blockdev copy ...passed 00:20:37.269 00:20:37.269 Run Summary: Type Total Ran Passed Failed Inactive 00:20:37.269 suites 1 1 n/a 0 0 00:20:37.269 tests 23 23 23 0 0 00:20:37.269 asserts 152 152 152 0 n/a 00:20:37.269 00:20:37.269 Elapsed time = 1.117 seconds 00:20:37.269 12:14:50 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:37.269 12:14:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:37.269 12:14:50 -- common/autotest_common.sh@10 -- # set +x 00:20:37.269 12:14:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:37.269 12:14:50 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:37.269 12:14:50 -- target/bdevio.sh@30 -- # nvmftestfini 00:20:37.269 12:14:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:37.269 12:14:50 -- nvmf/common.sh@116 -- # sync 00:20:37.269 12:14:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:37.269 12:14:50 -- nvmf/common.sh@119 -- # set +e 00:20:37.269 12:14:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:37.269 12:14:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:37.530 rmmod nvme_tcp 00:20:37.530 rmmod nvme_fabrics 00:20:37.530 rmmod nvme_keyring 00:20:37.530 12:14:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:37.530 12:14:50 -- nvmf/common.sh@123 -- # set -e 00:20:37.530 12:14:50 -- nvmf/common.sh@124 -- # return 0 00:20:37.530 12:14:50 -- nvmf/common.sh@477 -- # '[' -n 1509108 ']' 00:20:37.530 12:14:50 -- nvmf/common.sh@478 -- # killprocess 1509108 00:20:37.530 12:14:50 -- common/autotest_common.sh@926 -- # '[' -z 1509108 ']' 00:20:37.530 12:14:50 -- common/autotest_common.sh@930 -- # kill -0 1509108 00:20:37.530 12:14:50 -- common/autotest_common.sh@931 -- # uname 00:20:37.530 12:14:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:37.530 12:14:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1509108 00:20:37.530 12:14:50 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:20:37.530 12:14:50 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:20:37.530 12:14:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1509108' 00:20:37.530 killing process with pid 1509108 00:20:37.530 12:14:50 -- common/autotest_common.sh@945 -- # kill 1509108 00:20:37.530 12:14:50 -- common/autotest_common.sh@950 -- # wait 1509108 00:20:37.792 12:14:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:37.792 12:14:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:37.792 12:14:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:37.792 12:14:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:37.792 12:14:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:37.792 12:14:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.792 12:14:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:37.792 12:14:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.705 12:14:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:39.705 00:20:39.705 real 0m11.518s 00:20:39.705 user 0m11.863s 00:20:39.705 sys 0m5.785s 00:20:39.705 12:14:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:39.705 12:14:52 -- common/autotest_common.sh@10 -- # set +x 00:20:39.705 ************************************ 00:20:39.705 END TEST nvmf_bdevio 00:20:39.705 ************************************ 00:20:39.705 12:14:52 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:20:39.705 12:14:52 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:39.705 12:14:52 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:20:39.705 12:14:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:39.705 12:14:52 -- common/autotest_common.sh@10 -- # set +x 00:20:39.705 ************************************ 00:20:39.705 START TEST nvmf_bdevio_no_huge 00:20:39.705 ************************************ 00:20:39.705 12:14:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:39.966 * Looking for test storage... 
00:20:39.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:39.966 12:14:52 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:39.966 12:14:52 -- nvmf/common.sh@7 -- # uname -s 00:20:39.966 12:14:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.966 12:14:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.966 12:14:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.966 12:14:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.966 12:14:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.966 12:14:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.966 12:14:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.966 12:14:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.966 12:14:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.966 12:14:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.966 12:14:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:39.966 12:14:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:39.966 12:14:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.966 12:14:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.966 12:14:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:39.966 12:14:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:39.966 12:14:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.966 12:14:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.966 12:14:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.966 12:14:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.966 12:14:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.966 12:14:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.966 12:14:52 -- paths/export.sh@5 -- # export PATH 00:20:39.966 12:14:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.966 12:14:52 -- nvmf/common.sh@46 -- # : 0 00:20:39.966 12:14:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:39.966 12:14:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:39.966 12:14:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:39.966 12:14:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.966 12:14:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.966 12:14:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:39.966 12:14:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:39.966 12:14:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:39.966 12:14:52 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:39.966 12:14:52 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:39.966 12:14:52 -- target/bdevio.sh@14 -- # nvmftestinit 00:20:39.967 12:14:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:39.967 12:14:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.967 12:14:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:39.967 12:14:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:39.967 12:14:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:39.967 12:14:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.967 12:14:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.967 12:14:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.967 12:14:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:39.967 12:14:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:39.967 12:14:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:39.967 12:14:52 -- common/autotest_common.sh@10 -- # set +x 00:20:46.555 12:14:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:46.555 12:14:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:46.555 12:14:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:46.555 12:14:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:46.555 12:14:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:46.555 12:14:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:46.555 12:14:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:46.555 12:14:59 -- nvmf/common.sh@294 -- # net_devs=() 00:20:46.555 12:14:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:46.555 12:14:59 -- nvmf/common.sh@295 
-- # e810=() 00:20:46.555 12:14:59 -- nvmf/common.sh@295 -- # local -ga e810 00:20:46.555 12:14:59 -- nvmf/common.sh@296 -- # x722=() 00:20:46.555 12:14:59 -- nvmf/common.sh@296 -- # local -ga x722 00:20:46.555 12:14:59 -- nvmf/common.sh@297 -- # mlx=() 00:20:46.555 12:14:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:46.555 12:14:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:46.555 12:14:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:46.555 12:14:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:46.555 12:14:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:46.555 12:14:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:46.555 12:14:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:46.555 12:14:59 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:46.555 12:14:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:46.555 12:14:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:46.555 12:14:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:46.555 12:14:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:46.555 12:14:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:46.555 12:14:59 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:46.555 12:14:59 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:46.555 12:14:59 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:46.555 12:14:59 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:46.555 12:14:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:46.555 12:14:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:46.555 12:14:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:46.555 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:46.555 12:14:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:46.555 12:14:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:46.555 12:14:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.555 12:14:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.555 12:14:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:46.555 12:14:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:46.555 12:14:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:46.555 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:46.555 12:14:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:46.555 12:14:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:46.555 12:14:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.555 12:14:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.555 12:14:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:46.555 12:14:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:46.555 12:14:59 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:46.555 12:14:59 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:46.555 12:14:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:46.555 12:14:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.555 12:14:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:46.555 12:14:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.555 12:14:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:46.555 Found 
net devices under 0000:31:00.0: cvl_0_0 00:20:46.555 12:14:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.555 12:14:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:46.555 12:14:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.555 12:14:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:46.555 12:14:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.555 12:14:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:46.555 Found net devices under 0000:31:00.1: cvl_0_1 00:20:46.555 12:14:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.555 12:14:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:46.555 12:14:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:46.555 12:14:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:46.555 12:14:59 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:46.555 12:14:59 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:46.555 12:14:59 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:46.555 12:14:59 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:46.555 12:14:59 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:46.555 12:14:59 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:46.555 12:14:59 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:46.555 12:14:59 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:46.555 12:14:59 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:46.555 12:14:59 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:46.555 12:14:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:46.555 12:14:59 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:46.555 12:14:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:46.555 12:14:59 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:46.555 12:14:59 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:46.555 12:14:59 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:46.555 12:14:59 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:46.555 12:14:59 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:46.555 12:14:59 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:46.817 12:14:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:46.817 12:14:59 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:46.817 12:14:59 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:46.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:46.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:20:46.817 00:20:46.817 --- 10.0.0.2 ping statistics --- 00:20:46.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.817 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:20:46.817 12:14:59 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:46.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:46.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:20:46.817 00:20:46.817 --- 10.0.0.1 ping statistics --- 00:20:46.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.817 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:20:46.817 12:14:59 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:46.817 12:14:59 -- nvmf/common.sh@410 -- # return 0 00:20:46.817 12:14:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:46.817 12:14:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:46.817 12:14:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:46.817 12:14:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:46.817 12:14:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:46.817 12:14:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:46.818 12:14:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:46.818 12:14:59 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:46.818 12:14:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:46.818 12:14:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:46.818 12:14:59 -- common/autotest_common.sh@10 -- # set +x 00:20:46.818 12:14:59 -- nvmf/common.sh@469 -- # nvmfpid=1513708 00:20:46.818 12:14:59 -- nvmf/common.sh@470 -- # waitforlisten 1513708 00:20:46.818 12:14:59 -- common/autotest_common.sh@819 -- # '[' -z 1513708 ']' 00:20:46.818 12:14:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.818 12:14:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:46.818 12:14:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.818 12:14:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:46.818 12:14:59 -- common/autotest_common.sh@10 -- # set +x 00:20:46.818 12:14:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:46.818 [2024-06-11 12:14:59.732184] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:46.818 [2024-06-11 12:14:59.732242] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:46.818 [2024-06-11 12:14:59.818974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:47.079 [2024-06-11 12:14:59.893723] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:47.079 [2024-06-11 12:14:59.893866] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.079 [2024-06-11 12:14:59.893876] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.079 [2024-06-11 12:14:59.893883] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:47.079 [2024-06-11 12:14:59.894067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:47.079 [2024-06-11 12:14:59.894242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:47.079 [2024-06-11 12:14:59.894402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:47.079 [2024-06-11 12:14:59.894402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:47.652 12:15:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:47.652 12:15:00 -- common/autotest_common.sh@852 -- # return 0 00:20:47.652 12:15:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:47.652 12:15:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:47.652 12:15:00 -- common/autotest_common.sh@10 -- # set +x 00:20:47.652 12:15:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.652 12:15:00 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:47.652 12:15:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:47.652 12:15:00 -- common/autotest_common.sh@10 -- # set +x 00:20:47.652 [2024-06-11 12:15:00.555381] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.652 12:15:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:47.652 12:15:00 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:47.652 12:15:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:47.652 12:15:00 -- common/autotest_common.sh@10 -- # set +x 00:20:47.652 Malloc0 00:20:47.652 12:15:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:47.652 12:15:00 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:47.652 12:15:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:47.652 12:15:00 -- common/autotest_common.sh@10 -- # set +x 00:20:47.652 12:15:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:47.652 12:15:00 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:47.652 12:15:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:47.652 12:15:00 -- common/autotest_common.sh@10 -- # set +x 00:20:47.652 12:15:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:47.652 12:15:00 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:47.652 12:15:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:47.652 12:15:00 -- common/autotest_common.sh@10 -- # set +x 00:20:47.652 [2024-06-11 12:15:00.609092] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.652 12:15:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:47.652 12:15:00 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:47.652 12:15:00 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:47.652 12:15:00 -- nvmf/common.sh@520 -- # config=() 00:20:47.652 12:15:00 -- nvmf/common.sh@520 -- # local subsystem config 00:20:47.652 12:15:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:47.652 12:15:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:47.652 { 00:20:47.652 "params": { 00:20:47.652 "name": "Nvme$subsystem", 00:20:47.652 "trtype": "$TEST_TRANSPORT", 00:20:47.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.652 "adrfam": "ipv4", 00:20:47.652 
"trsvcid": "$NVMF_PORT", 00:20:47.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.652 "hdgst": ${hdgst:-false}, 00:20:47.652 "ddgst": ${ddgst:-false} 00:20:47.652 }, 00:20:47.652 "method": "bdev_nvme_attach_controller" 00:20:47.652 } 00:20:47.652 EOF 00:20:47.652 )") 00:20:47.652 12:15:00 -- nvmf/common.sh@542 -- # cat 00:20:47.652 12:15:00 -- nvmf/common.sh@544 -- # jq . 00:20:47.652 12:15:00 -- nvmf/common.sh@545 -- # IFS=, 00:20:47.652 12:15:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:47.652 "params": { 00:20:47.652 "name": "Nvme1", 00:20:47.652 "trtype": "tcp", 00:20:47.652 "traddr": "10.0.0.2", 00:20:47.652 "adrfam": "ipv4", 00:20:47.652 "trsvcid": "4420", 00:20:47.652 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.652 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.652 "hdgst": false, 00:20:47.652 "ddgst": false 00:20:47.652 }, 00:20:47.652 "method": "bdev_nvme_attach_controller" 00:20:47.652 }' 00:20:47.652 [2024-06-11 12:15:00.660744] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:47.652 [2024-06-11 12:15:00.660816] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1513864 ] 00:20:47.914 [2024-06-11 12:15:00.729712] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:47.914 [2024-06-11 12:15:00.798431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.914 [2024-06-11 12:15:00.798559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.914 [2024-06-11 12:15:00.798556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.179 [2024-06-11 12:15:01.018448] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:20:48.179 [2024-06-11 12:15:01.018473] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:48.179 I/O targets: 00:20:48.179 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:48.179 00:20:48.179 00:20:48.179 CUnit - A unit testing framework for C - Version 2.1-3 00:20:48.179 http://cunit.sourceforge.net/ 00:20:48.179 00:20:48.179 00:20:48.179 Suite: bdevio tests on: Nvme1n1 00:20:48.179 Test: blockdev write read block ...passed 00:20:48.179 Test: blockdev write zeroes read block ...passed 00:20:48.179 Test: blockdev write zeroes read no split ...passed 00:20:48.179 Test: blockdev write zeroes read split ...passed 00:20:48.179 Test: blockdev write zeroes read split partial ...passed 00:20:48.179 Test: blockdev reset ...[2024-06-11 12:15:01.186581] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:48.179 [2024-06-11 12:15:01.186631] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100a1a0 (9): Bad file descriptor 00:20:48.179 [2024-06-11 12:15:01.208334] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:48.179 passed 00:20:48.444 Test: blockdev write read 8 blocks ...passed 00:20:48.444 Test: blockdev write read size > 128k ...passed 00:20:48.444 Test: blockdev write read invalid size ...passed 00:20:48.444 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:48.444 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:48.444 Test: blockdev write read max offset ...passed 00:20:48.444 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:48.444 Test: blockdev writev readv 8 blocks ...passed 00:20:48.444 Test: blockdev writev readv 30 x 1block ...passed 00:20:48.444 Test: blockdev writev readv block ...passed 00:20:48.444 Test: blockdev writev readv size > 128k ...passed 00:20:48.444 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:48.444 Test: blockdev comparev and writev ...[2024-06-11 12:15:01.431860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:48.444 [2024-06-11 12:15:01.431883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:48.444 [2024-06-11 12:15:01.431894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:48.444 [2024-06-11 12:15:01.431900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.444 [2024-06-11 12:15:01.432280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:48.444 [2024-06-11 12:15:01.432288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:48.444 [2024-06-11 12:15:01.432297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:48.444 [2024-06-11 12:15:01.432302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:48.444 [2024-06-11 12:15:01.432668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:48.444 [2024-06-11 12:15:01.432675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:48.444 [2024-06-11 12:15:01.432684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:48.444 [2024-06-11 12:15:01.432689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:48.444 [2024-06-11 12:15:01.433022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:48.444 [2024-06-11 12:15:01.433030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:48.444 [2024-06-11 12:15:01.433039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:48.444 [2024-06-11 12:15:01.433044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:48.444 passed 00:20:48.756 Test: blockdev nvme passthru rw ...passed 00:20:48.756 Test: blockdev nvme passthru vendor specific ...[2024-06-11 12:15:01.518865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:48.756 [2024-06-11 12:15:01.518875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:48.756 [2024-06-11 12:15:01.519270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:48.756 [2024-06-11 12:15:01.519277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:48.756 [2024-06-11 12:15:01.519614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:48.756 [2024-06-11 12:15:01.519621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:48.756 [2024-06-11 12:15:01.519949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:48.756 [2024-06-11 12:15:01.519956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:48.756 passed 00:20:48.756 Test: blockdev nvme admin passthru ...passed 00:20:48.756 Test: blockdev copy ...passed 00:20:48.756 00:20:48.756 Run Summary: Type Total Ran Passed Failed Inactive 00:20:48.756 suites 1 1 n/a 0 0 00:20:48.756 tests 23 23 23 0 0 00:20:48.756 asserts 152 152 152 0 n/a 00:20:48.756 00:20:48.756 Elapsed time = 1.134 seconds 00:20:49.028 12:15:01 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:49.028 12:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.028 12:15:01 -- common/autotest_common.sh@10 -- # set +x 00:20:49.028 12:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.028 12:15:01 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:49.028 12:15:01 -- target/bdevio.sh@30 -- # nvmftestfini 00:20:49.028 12:15:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:49.028 12:15:01 -- nvmf/common.sh@116 -- # sync 00:20:49.028 12:15:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:49.028 12:15:01 -- nvmf/common.sh@119 -- # set +e 00:20:49.028 12:15:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:49.028 12:15:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:49.028 rmmod nvme_tcp 00:20:49.028 rmmod nvme_fabrics 00:20:49.028 rmmod nvme_keyring 00:20:49.028 12:15:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:49.028 12:15:01 -- nvmf/common.sh@123 -- # set -e 00:20:49.028 12:15:01 -- nvmf/common.sh@124 -- # return 0 00:20:49.028 12:15:01 -- nvmf/common.sh@477 -- # '[' -n 1513708 ']' 00:20:49.028 12:15:01 -- nvmf/common.sh@478 -- # killprocess 1513708 00:20:49.028 12:15:01 -- common/autotest_common.sh@926 -- # '[' -z 1513708 ']' 00:20:49.028 12:15:01 -- common/autotest_common.sh@930 -- # kill -0 1513708 00:20:49.028 12:15:01 -- common/autotest_common.sh@931 -- # uname 00:20:49.028 12:15:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:49.028 12:15:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1513708 00:20:49.028 12:15:01 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:20:49.028 12:15:01 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:20:49.028 12:15:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1513708' 00:20:49.028 killing process with pid 1513708 00:20:49.028 12:15:01 -- common/autotest_common.sh@945 -- # kill 1513708 00:20:49.028 12:15:01 -- common/autotest_common.sh@950 -- # wait 1513708 00:20:49.290 12:15:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:49.290 12:15:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:49.290 12:15:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:49.290 12:15:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:49.290 12:15:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:49.290 12:15:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.290 12:15:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:49.290 12:15:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.835 12:15:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:51.835 00:20:51.835 real 0m11.575s 00:20:51.835 user 0m13.122s 00:20:51.835 sys 0m5.990s 00:20:51.835 12:15:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:51.835 12:15:04 -- common/autotest_common.sh@10 -- # set +x 00:20:51.835 ************************************ 00:20:51.835 END TEST nvmf_bdevio_no_huge 00:20:51.835 ************************************ 00:20:51.835 12:15:04 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:51.835 12:15:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:51.835 12:15:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:51.835 12:15:04 -- common/autotest_common.sh@10 -- # set +x 00:20:51.835 ************************************ 00:20:51.835 START TEST nvmf_tls 00:20:51.835 ************************************ 00:20:51.835 12:15:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:51.835 * Looking for test storage... 
00:20:51.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:51.835 12:15:04 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:51.835 12:15:04 -- nvmf/common.sh@7 -- # uname -s 00:20:51.835 12:15:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.835 12:15:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.835 12:15:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.835 12:15:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.835 12:15:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.835 12:15:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.835 12:15:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.835 12:15:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.835 12:15:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.835 12:15:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.835 12:15:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:51.835 12:15:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:51.835 12:15:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.835 12:15:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.835 12:15:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:51.835 12:15:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:51.835 12:15:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.835 12:15:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.835 12:15:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.835 12:15:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.835 12:15:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.835 12:15:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.835 12:15:04 -- paths/export.sh@5 -- # export PATH 00:20:51.835 12:15:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.835 12:15:04 -- nvmf/common.sh@46 -- # : 0 00:20:51.835 12:15:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:51.835 12:15:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:51.835 12:15:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:51.835 12:15:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.835 12:15:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.835 12:15:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:51.835 12:15:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:51.835 12:15:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:51.835 12:15:04 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:51.835 12:15:04 -- target/tls.sh@71 -- # nvmftestinit 00:20:51.835 12:15:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:51.835 12:15:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.835 12:15:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:51.835 12:15:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:51.835 12:15:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:51.835 12:15:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.835 12:15:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:51.835 12:15:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.835 12:15:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:51.835 12:15:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:51.835 12:15:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:51.835 12:15:04 -- common/autotest_common.sh@10 -- # set +x 00:20:58.422 12:15:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:58.422 12:15:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:58.422 12:15:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:58.422 12:15:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:58.422 12:15:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:58.422 12:15:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:58.422 12:15:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:58.422 12:15:11 -- nvmf/common.sh@294 -- # net_devs=() 00:20:58.422 12:15:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:58.422 12:15:11 -- nvmf/common.sh@295 -- # e810=() 00:20:58.422 
12:15:11 -- nvmf/common.sh@295 -- # local -ga e810 00:20:58.422 12:15:11 -- nvmf/common.sh@296 -- # x722=() 00:20:58.422 12:15:11 -- nvmf/common.sh@296 -- # local -ga x722 00:20:58.422 12:15:11 -- nvmf/common.sh@297 -- # mlx=() 00:20:58.422 12:15:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:58.422 12:15:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:58.422 12:15:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:58.422 12:15:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:58.422 12:15:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:58.422 12:15:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:58.422 12:15:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:58.422 12:15:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:58.422 12:15:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:58.422 12:15:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:58.422 12:15:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:58.422 12:15:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:58.422 12:15:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:58.422 12:15:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:58.422 12:15:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:58.422 12:15:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:58.422 12:15:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:58.422 12:15:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:58.422 12:15:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:58.422 12:15:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:58.422 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:58.422 12:15:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:58.422 12:15:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:58.422 12:15:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.422 12:15:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.422 12:15:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:58.422 12:15:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:58.422 12:15:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:58.422 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:58.422 12:15:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:58.422 12:15:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:58.422 12:15:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.422 12:15:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.422 12:15:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:58.422 12:15:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:58.422 12:15:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:58.422 12:15:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:58.422 12:15:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:58.422 12:15:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.422 12:15:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:58.422 12:15:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.422 12:15:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:58.422 Found net devices under 
0000:31:00.0: cvl_0_0 00:20:58.422 12:15:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.422 12:15:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:58.422 12:15:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.422 12:15:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:58.422 12:15:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.422 12:15:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:58.422 Found net devices under 0000:31:00.1: cvl_0_1 00:20:58.422 12:15:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.422 12:15:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:58.422 12:15:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:58.422 12:15:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:58.422 12:15:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:58.422 12:15:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:58.422 12:15:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:58.422 12:15:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:58.422 12:15:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:58.422 12:15:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:58.422 12:15:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:58.422 12:15:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:58.422 12:15:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:58.422 12:15:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:58.422 12:15:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:58.422 12:15:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:58.422 12:15:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:58.422 12:15:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:58.422 12:15:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:58.422 12:15:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:58.422 12:15:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:58.422 12:15:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:58.422 12:15:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:58.684 12:15:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:58.684 12:15:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:58.684 12:15:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:58.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:58.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:20:58.684 00:20:58.684 --- 10.0.0.2 ping statistics --- 00:20:58.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.684 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:20:58.684 12:15:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:58.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:58.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:20:58.684 00:20:58.684 --- 10.0.0.1 ping statistics --- 00:20:58.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.684 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:20:58.684 12:15:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:58.684 12:15:11 -- nvmf/common.sh@410 -- # return 0 00:20:58.684 12:15:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:58.684 12:15:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:58.684 12:15:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:58.684 12:15:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:58.684 12:15:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:58.684 12:15:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:58.684 12:15:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:58.684 12:15:11 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:58.684 12:15:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:58.684 12:15:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:58.684 12:15:11 -- common/autotest_common.sh@10 -- # set +x 00:20:58.684 12:15:11 -- nvmf/common.sh@469 -- # nvmfpid=1518952 00:20:58.684 12:15:11 -- nvmf/common.sh@470 -- # waitforlisten 1518952 00:20:58.684 12:15:11 -- common/autotest_common.sh@819 -- # '[' -z 1518952 ']' 00:20:58.684 12:15:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.684 12:15:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:58.684 12:15:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.684 12:15:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:58.684 12:15:11 -- common/autotest_common.sh@10 -- # set +x 00:20:58.684 12:15:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:58.684 [2024-06-11 12:15:11.633780] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:58.684 [2024-06-11 12:15:11.633843] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.684 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.946 [2024-06-11 12:15:11.723929] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.946 [2024-06-11 12:15:11.768672] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:58.946 [2024-06-11 12:15:11.768821] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.946 [2024-06-11 12:15:11.768831] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.946 [2024-06-11 12:15:11.768839] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
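The nvmf_tcp_init trace above sets up the physical-loopback topology the whole TLS suite runs on: the first cvl port is moved into a private network namespace and becomes the target side (10.0.0.2), the second port stays in the root namespace as the initiator (10.0.0.1), port 4420 is opened in the firewall, and a ping in each direction proves the link. A condensed sketch of the same sequence (root required, interface names as in this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                              # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator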
00:20:58.946 [2024-06-11 12:15:11.768862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.518 12:15:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:59.518 12:15:12 -- common/autotest_common.sh@852 -- # return 0 00:20:59.518 12:15:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:59.518 12:15:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:59.518 12:15:12 -- common/autotest_common.sh@10 -- # set +x 00:20:59.518 12:15:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.518 12:15:12 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:20:59.518 12:15:12 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:59.779 true 00:20:59.779 12:15:12 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:59.779 12:15:12 -- target/tls.sh@82 -- # jq -r .tls_version 00:20:59.779 12:15:12 -- target/tls.sh@82 -- # version=0 00:20:59.779 12:15:12 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:20:59.779 12:15:12 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:00.041 12:15:12 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:00.041 12:15:12 -- target/tls.sh@90 -- # jq -r .tls_version 00:21:00.302 12:15:13 -- target/tls.sh@90 -- # version=13 00:21:00.302 12:15:13 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:21:00.302 12:15:13 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:00.302 12:15:13 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:00.302 12:15:13 -- target/tls.sh@98 -- # jq -r .tls_version 00:21:00.564 12:15:13 -- target/tls.sh@98 -- # version=7 00:21:00.564 12:15:13 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:21:00.564 12:15:13 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:00.564 12:15:13 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:21:00.564 12:15:13 -- target/tls.sh@105 -- # ktls=false 00:21:00.564 12:15:13 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:21:00.564 12:15:13 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:00.825 12:15:13 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:00.825 12:15:13 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:21:01.087 12:15:13 -- target/tls.sh@113 -- # ktls=true 00:21:01.087 12:15:13 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:21:01.087 12:15:13 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:01.087 12:15:14 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:01.087 12:15:14 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:21:01.348 12:15:14 -- target/tls.sh@121 -- # ktls=false 00:21:01.348 12:15:14 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:21:01.348 12:15:14 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
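The stretch of trace above is a set-then-read-back check: tls.sh switches the default socket implementation to ssl, pushes a TLS version (13, then 7) and the ktls flag through sock_impl_set_options, and immediately reads each value back with sock_impl_get_options piped into jq, bailing out if the echo does not match. A condensed sketch of that pattern, with rpc standing in for the absolute scripts/rpc.py path used in the log:

  rpc=./scripts/rpc.py
  $rpc sock_set_default_impl -i ssl
  $rpc sock_impl_set_options -i ssl --tls-version 13
  version=$($rpc sock_impl_get_options -i ssl | jq -r .tls_version)
  [[ $version == 13 ]] || exit 1                    # read-back must match what was set
  $rpc sock_impl_set_options -i ssl --enable-ktls
  ktls=$($rpc sock_impl_get_options -i ssl | jq -r .enable_ktls)
  [[ $ktls == true ]] || exit 1
  $rpc sock_impl_set_options -i ssl --disable-ktls  # the rest of the suite runs with ktls off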
00:21:01.348 12:15:14 -- target/tls.sh@49 -- # local key hash crc 00:21:01.348 12:15:14 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:21:01.348 12:15:14 -- target/tls.sh@51 -- # hash=01 00:21:01.348 12:15:14 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:21:01.348 12:15:14 -- target/tls.sh@52 -- # gzip -1 -c 00:21:01.348 12:15:14 -- target/tls.sh@52 -- # tail -c8 00:21:01.348 12:15:14 -- target/tls.sh@52 -- # head -c 4 00:21:01.348 12:15:14 -- target/tls.sh@52 -- # crc='p$H�' 00:21:01.348 12:15:14 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:21:01.348 12:15:14 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:21:01.348 12:15:14 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:01.348 12:15:14 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:01.348 12:15:14 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:21:01.348 12:15:14 -- target/tls.sh@49 -- # local key hash crc 00:21:01.348 12:15:14 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:21:01.348 12:15:14 -- target/tls.sh@51 -- # hash=01 00:21:01.348 12:15:14 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:21:01.348 12:15:14 -- target/tls.sh@52 -- # gzip -1 -c 00:21:01.348 12:15:14 -- target/tls.sh@52 -- # tail -c8 00:21:01.348 12:15:14 -- target/tls.sh@52 -- # head -c 4 00:21:01.348 12:15:14 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:21:01.348 12:15:14 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:21:01.348 12:15:14 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:21:01.348 12:15:14 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:01.348 12:15:14 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:01.348 12:15:14 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:01.348 12:15:14 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:01.348 12:15:14 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:01.348 12:15:14 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:01.348 12:15:14 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:01.348 12:15:14 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:01.348 12:15:14 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:01.609 12:15:14 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:01.609 12:15:14 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:01.609 12:15:14 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:01.609 12:15:14 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:01.870 [2024-06-11 12:15:14.763502] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
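format_interchange_psk, expanded above, wraps a configured key into the NVMe TLS PSK interchange form NVMeTLSkey-1:<hash>:<base64 of key plus CRC32>:. The part worth calling out is how the CRC32 is produced with no extra tooling: the key is piped through gzip -1, whose 8-byte trailer is the CRC32 followed by the input length, so tail -c8 | head -c4 peels the CRC off. A sketch of the same derivation using the 16-byte demo key from the trace (like the traced helper, it keeps the key as its ASCII hex string and assumes the CRC bytes contain no NUL or newline):

  key=00112233445566778899aabbccddeeff
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)    # gzip trailer starts with the CRC32
  psk="NVMeTLSkey-1:01:$(echo -n "${key}${crc}" | base64):"    # '01' is the hash indicator field;
                                                               # tls.sh later derives an '02' variant the same way
  echo -n "$psk" > key1.txt
  chmod 0600 key1.txt                                          # the PSK file must not be world-readable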
00:21:01.870 12:15:14 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:02.130 12:15:14 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:02.130 [2024-06-11 12:15:15.060226] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:02.130 [2024-06-11 12:15:15.060395] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.130 12:15:15 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:02.391 malloc0 00:21:02.391 12:15:15 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:02.391 12:15:15 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:02.652 12:15:15 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:02.652 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.656 Initializing NVMe Controllers 00:21:12.656 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:12.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:12.656 Initialization complete. Launching workers. 
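Taken together, the RPCs above are the complete TLS-enabled target bring-up: create the TCP transport, create a subsystem backed by a malloc bdev, open a listener with -k so the port demands a secure channel, and register the one allowed host together with its PSK file; spdk_nvme_perf then connects with the ssl socket implementation and the matching --psk-path. A condensed sketch, with repo-relative paths standing in for the absolute jenkins paths in the trace and key1.txt holding the interchange-format key written above:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key1.txt
  # Initiator side, run from the target namespace exactly as in the trace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
      --psk-path key1.txt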
00:21:12.656 ======================================================== 00:21:12.656 Latency(us) 00:21:12.656 Device Information : IOPS MiB/s Average min max 00:21:12.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19724.98 77.05 3244.60 920.00 4199.38 00:21:12.656 ======================================================== 00:21:12.656 Total : 19724.98 77.05 3244.60 920.00 4199.38 00:21:12.656 00:21:12.656 12:15:25 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:12.656 12:15:25 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:12.656 12:15:25 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:12.656 12:15:25 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:12.656 12:15:25 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:21:12.656 12:15:25 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:12.656 12:15:25 -- target/tls.sh@28 -- # bdevperf_pid=1521822 00:21:12.656 12:15:25 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:12.656 12:15:25 -- target/tls.sh@31 -- # waitforlisten 1521822 /var/tmp/bdevperf.sock 00:21:12.656 12:15:25 -- common/autotest_common.sh@819 -- # '[' -z 1521822 ']' 00:21:12.656 12:15:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:12.656 12:15:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:12.656 12:15:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:12.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:12.656 12:15:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:12.656 12:15:25 -- common/autotest_common.sh@10 -- # set +x 00:21:12.656 12:15:25 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:12.917 [2024-06-11 12:15:25.692896] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
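run_bdevperf, whose expansion begins here and continues below, is the per-case harness used for the rest of the file: it launches bdevperf idle (-z) on a private RPC socket, attaches an NVMe-oF controller to the TLS listener with bdev_nvme_attach_controller --psk, then drives the queued verify job through bdevperf.py perform_tests. A sketch of those three steps with repo-relative paths (the real helper waits for the RPC socket with waitforlisten before issuing any command; a sleep stands in for that here):

  sock=/var/tmp/bdevperf.sock
  ./build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
  sleep 2                                            # stand-in for waitforlisten on $sock
  ./scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key1.txt
  ./examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests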
00:21:12.917 [2024-06-11 12:15:25.692951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1521822 ] 00:21:12.917 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.917 [2024-06-11 12:15:25.743008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.917 [2024-06-11 12:15:25.769498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:13.489 12:15:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:13.489 12:15:26 -- common/autotest_common.sh@852 -- # return 0 00:21:13.489 12:15:26 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:13.750 [2024-06-11 12:15:26.565195] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:13.750 TLSTESTn1 00:21:13.750 12:15:26 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:13.750 Running I/O for 10 seconds... 00:21:23.752 00:21:23.752 Latency(us) 00:21:23.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.752 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:23.753 Verification LBA range: start 0x0 length 0x2000 00:21:23.753 TLSTESTn1 : 10.02 5918.41 23.12 0.00 0.00 21603.97 4150.61 51336.53 00:21:23.753 =================================================================================================================== 00:21:23.753 Total : 5918.41 23.12 0.00 0.00 21603.97 4150.61 51336.53 00:21:23.753 0 00:21:24.014 12:15:36 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:24.014 12:15:36 -- target/tls.sh@45 -- # killprocess 1521822 00:21:24.014 12:15:36 -- common/autotest_common.sh@926 -- # '[' -z 1521822 ']' 00:21:24.014 12:15:36 -- common/autotest_common.sh@930 -- # kill -0 1521822 00:21:24.014 12:15:36 -- common/autotest_common.sh@931 -- # uname 00:21:24.014 12:15:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:24.014 12:15:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1521822 00:21:24.014 12:15:36 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:24.014 12:15:36 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:24.014 12:15:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1521822' 00:21:24.014 killing process with pid 1521822 00:21:24.014 12:15:36 -- common/autotest_common.sh@945 -- # kill 1521822 00:21:24.014 Received shutdown signal, test time was about 10.000000 seconds 00:21:24.014 00:21:24.014 Latency(us) 00:21:24.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.014 =================================================================================================================== 00:21:24.014 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:24.014 12:15:36 -- common/autotest_common.sh@950 -- # wait 1521822 00:21:24.014 12:15:36 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:24.014 12:15:36 -- common/autotest_common.sh@640 -- # local es=0 00:21:24.014 12:15:36 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:24.014 12:15:36 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:24.014 12:15:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:24.014 12:15:36 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:24.014 12:15:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:24.014 12:15:36 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:24.014 12:15:36 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:24.014 12:15:36 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:24.014 12:15:36 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:24.014 12:15:36 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:21:24.014 12:15:36 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:24.014 12:15:36 -- target/tls.sh@28 -- # bdevperf_pid=1523872 00:21:24.014 12:15:36 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:24.014 12:15:36 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:24.014 12:15:36 -- target/tls.sh@31 -- # waitforlisten 1523872 /var/tmp/bdevperf.sock 00:21:24.014 12:15:36 -- common/autotest_common.sh@819 -- # '[' -z 1523872 ']' 00:21:24.014 12:15:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:24.014 12:15:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:24.014 12:15:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:24.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:24.014 12:15:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:24.014 12:15:36 -- common/autotest_common.sh@10 -- # set +x 00:21:24.014 [2024-06-11 12:15:37.001004] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
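The NOT prefix at the head of this case is the suite's negative-test wrapper from autotest_common.sh: it runs the wrapped command, records its exit status in es, and itself succeeds only when that status is non-zero, so the attach below, made with the unregistered key2.txt, has to be rejected for the case to pass. A minimal sketch of the inversion logic (a condensation for illustration, not the upstream helper itself):

  NOT() {
      local es=0
      "$@" || es=$?      # run the wrapped command, remember how it exited
      (( es != 0 ))      # succeed only if the command failed
  }
  NOT ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk key2.txt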
00:21:24.014 [2024-06-11 12:15:37.001068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1523872 ] 00:21:24.014 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.275 [2024-06-11 12:15:37.051837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.275 [2024-06-11 12:15:37.078164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.847 12:15:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:24.847 12:15:37 -- common/autotest_common.sh@852 -- # return 0 00:21:24.847 12:15:37 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:25.108 [2024-06-11 12:15:37.893945] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:25.108 [2024-06-11 12:15:37.899245] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:25.108 [2024-06-11 12:15:37.899916] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x179ed00 (107): Transport endpoint is not connected 00:21:25.108 [2024-06-11 12:15:37.900912] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x179ed00 (9): Bad file descriptor 00:21:25.108 [2024-06-11 12:15:37.901913] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.108 [2024-06-11 12:15:37.901920] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:25.108 [2024-06-11 12:15:37.901926] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:25.108 request: 00:21:25.108 { 00:21:25.108 "name": "TLSTEST", 00:21:25.108 "trtype": "tcp", 00:21:25.108 "traddr": "10.0.0.2", 00:21:25.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:25.108 "adrfam": "ipv4", 00:21:25.108 "trsvcid": "4420", 00:21:25.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.108 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:21:25.108 "method": "bdev_nvme_attach_controller", 00:21:25.108 "req_id": 1 00:21:25.108 } 00:21:25.108 Got JSON-RPC error response 00:21:25.108 response: 00:21:25.108 { 00:21:25.108 "code": -32602, 00:21:25.108 "message": "Invalid parameters" 00:21:25.108 } 00:21:25.108 12:15:37 -- target/tls.sh@36 -- # killprocess 1523872 00:21:25.108 12:15:37 -- common/autotest_common.sh@926 -- # '[' -z 1523872 ']' 00:21:25.108 12:15:37 -- common/autotest_common.sh@930 -- # kill -0 1523872 00:21:25.108 12:15:37 -- common/autotest_common.sh@931 -- # uname 00:21:25.108 12:15:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:25.109 12:15:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1523872 00:21:25.109 12:15:37 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:25.109 12:15:37 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:25.109 12:15:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1523872' 00:21:25.109 killing process with pid 1523872 00:21:25.109 12:15:37 -- common/autotest_common.sh@945 -- # kill 1523872 00:21:25.109 Received shutdown signal, test time was about 10.000000 seconds 00:21:25.109 00:21:25.109 Latency(us) 00:21:25.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.109 =================================================================================================================== 00:21:25.109 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:25.109 12:15:37 -- common/autotest_common.sh@950 -- # wait 1523872 00:21:25.109 12:15:38 -- target/tls.sh@37 -- # return 1 00:21:25.109 12:15:38 -- common/autotest_common.sh@643 -- # es=1 00:21:25.109 12:15:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:25.109 12:15:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:25.109 12:15:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:25.109 12:15:38 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:25.109 12:15:38 -- common/autotest_common.sh@640 -- # local es=0 00:21:25.109 12:15:38 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:25.109 12:15:38 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:25.109 12:15:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:25.109 12:15:38 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:25.109 12:15:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:25.109 12:15:38 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:25.109 12:15:38 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:25.109 12:15:38 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:25.109 12:15:38 -- target/tls.sh@23 -- 
# hostnqn=nqn.2016-06.io.spdk:host2 00:21:25.109 12:15:38 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:21:25.109 12:15:38 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:25.109 12:15:38 -- target/tls.sh@28 -- # bdevperf_pid=1524204 00:21:25.109 12:15:38 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:25.109 12:15:38 -- target/tls.sh@31 -- # waitforlisten 1524204 /var/tmp/bdevperf.sock 00:21:25.109 12:15:38 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:25.109 12:15:38 -- common/autotest_common.sh@819 -- # '[' -z 1524204 ']' 00:21:25.109 12:15:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:25.109 12:15:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:25.109 12:15:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:25.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:25.109 12:15:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:25.109 12:15:38 -- common/autotest_common.sh@10 -- # set +x 00:21:25.109 [2024-06-11 12:15:38.135399] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:25.109 [2024-06-11 12:15:38.135451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524204 ] 00:21:25.370 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.370 [2024-06-11 12:15:38.186567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.370 [2024-06-11 12:15:38.211656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:25.941 12:15:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:25.941 12:15:38 -- common/autotest_common.sh@852 -- # return 0 00:21:25.941 12:15:38 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:26.201 [2024-06-11 12:15:39.035301] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:26.201 [2024-06-11 12:15:39.041928] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:26.201 [2024-06-11 12:15:39.041945] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:26.201 [2024-06-11 12:15:39.041965] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:26.201 [2024-06-11 12:15:39.042443] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x803d00 (107): Transport endpoint is not connected 00:21:26.201 [2024-06-11 12:15:39.043439] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: 
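The failure traced just below is an identity miss rather than a bad key: the target looks the PSK up under a string built from the TLS hash indicator and the connecting host and subsystem NQNs (printed in the error as NVMe0R01 <hostnqn> <subnqn>), and only host1 was registered via nvmf_subsystem_add_host, so connecting as host2 finds nothing even though key1.txt itself is valid. A rough illustration of the lookup key as it appears in this log, hedged as a reading of the error text rather than of the spec:

  hostnqn=nqn.2016-06.io.spdk:host2
  subnqn=nqn.2016-06.io.spdk:cnode1
  identity="NVMe0R01 ${hostnqn} ${subnqn}"   # matches the 'Could not find PSK for identity' message
  echo "$identity"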
Failed to flush tqpair=0x803d00 (9): Bad file descriptor 00:21:26.201 [2024-06-11 12:15:39.044441] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:26.201 [2024-06-11 12:15:39.044447] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:26.201 [2024-06-11 12:15:39.044453] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:26.201 request: 00:21:26.201 { 00:21:26.201 "name": "TLSTEST", 00:21:26.201 "trtype": "tcp", 00:21:26.201 "traddr": "10.0.0.2", 00:21:26.201 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:26.201 "adrfam": "ipv4", 00:21:26.201 "trsvcid": "4420", 00:21:26.201 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:26.201 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:21:26.201 "method": "bdev_nvme_attach_controller", 00:21:26.201 "req_id": 1 00:21:26.201 } 00:21:26.201 Got JSON-RPC error response 00:21:26.201 response: 00:21:26.201 { 00:21:26.201 "code": -32602, 00:21:26.201 "message": "Invalid parameters" 00:21:26.201 } 00:21:26.201 12:15:39 -- target/tls.sh@36 -- # killprocess 1524204 00:21:26.201 12:15:39 -- common/autotest_common.sh@926 -- # '[' -z 1524204 ']' 00:21:26.201 12:15:39 -- common/autotest_common.sh@930 -- # kill -0 1524204 00:21:26.201 12:15:39 -- common/autotest_common.sh@931 -- # uname 00:21:26.201 12:15:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:26.201 12:15:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1524204 00:21:26.201 12:15:39 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:26.201 12:15:39 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:26.201 12:15:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1524204' 00:21:26.201 killing process with pid 1524204 00:21:26.201 12:15:39 -- common/autotest_common.sh@945 -- # kill 1524204 00:21:26.201 Received shutdown signal, test time was about 10.000000 seconds 00:21:26.201 00:21:26.201 Latency(us) 00:21:26.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.201 =================================================================================================================== 00:21:26.201 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:26.201 12:15:39 -- common/autotest_common.sh@950 -- # wait 1524204 00:21:26.201 12:15:39 -- target/tls.sh@37 -- # return 1 00:21:26.201 12:15:39 -- common/autotest_common.sh@643 -- # es=1 00:21:26.201 12:15:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:26.201 12:15:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:26.201 12:15:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:26.201 12:15:39 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:26.201 12:15:39 -- common/autotest_common.sh@640 -- # local es=0 00:21:26.201 12:15:39 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:26.201 12:15:39 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:26.201 12:15:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:26.201 12:15:39 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:26.201 12:15:39 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:26.201 12:15:39 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:26.201 12:15:39 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:26.201 12:15:39 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:26.201 12:15:39 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:26.201 12:15:39 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:21:26.201 12:15:39 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:26.201 12:15:39 -- target/tls.sh@28 -- # bdevperf_pid=1524465 00:21:26.201 12:15:39 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:26.201 12:15:39 -- target/tls.sh@31 -- # waitforlisten 1524465 /var/tmp/bdevperf.sock 00:21:26.201 12:15:39 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:26.201 12:15:39 -- common/autotest_common.sh@819 -- # '[' -z 1524465 ']' 00:21:26.201 12:15:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.201 12:15:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:26.201 12:15:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:26.201 12:15:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:26.462 12:15:39 -- common/autotest_common.sh@10 -- # set +x 00:21:26.462 [2024-06-11 12:15:39.276978] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:21:26.462 [2024-06-11 12:15:39.277037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524465 ] 00:21:26.462 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.462 [2024-06-11 12:15:39.327859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.462 [2024-06-11 12:15:39.353805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.034 12:15:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:27.034 12:15:40 -- common/autotest_common.sh@852 -- # return 0 00:21:27.034 12:15:40 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:27.295 [2024-06-11 12:15:40.181697] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:27.295 [2024-06-11 12:15:40.186080] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:27.295 [2024-06-11 12:15:40.186099] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:27.295 [2024-06-11 12:15:40.186119] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:27.295 [2024-06-11 12:15:40.186785] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fddd00 (107): Transport endpoint is not connected 00:21:27.295 [2024-06-11 12:15:40.187779] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fddd00 (9): Bad file descriptor 00:21:27.295 [2024-06-11 12:15:40.188781] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:27.295 [2024-06-11 12:15:40.188788] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:27.295 [2024-06-11 12:15:40.188795] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:27.295 request: 00:21:27.295 { 00:21:27.295 "name": "TLSTEST", 00:21:27.295 "trtype": "tcp", 00:21:27.295 "traddr": "10.0.0.2", 00:21:27.295 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:27.295 "adrfam": "ipv4", 00:21:27.295 "trsvcid": "4420", 00:21:27.295 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:27.295 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:21:27.295 "method": "bdev_nvme_attach_controller", 00:21:27.295 "req_id": 1 00:21:27.295 } 00:21:27.295 Got JSON-RPC error response 00:21:27.295 response: 00:21:27.295 { 00:21:27.295 "code": -32602, 00:21:27.295 "message": "Invalid parameters" 00:21:27.295 } 00:21:27.295 12:15:40 -- target/tls.sh@36 -- # killprocess 1524465 00:21:27.295 12:15:40 -- common/autotest_common.sh@926 -- # '[' -z 1524465 ']' 00:21:27.295 12:15:40 -- common/autotest_common.sh@930 -- # kill -0 1524465 00:21:27.295 12:15:40 -- common/autotest_common.sh@931 -- # uname 00:21:27.295 12:15:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:27.295 12:15:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1524465 00:21:27.295 12:15:40 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:27.295 12:15:40 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:27.295 12:15:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1524465' 00:21:27.295 killing process with pid 1524465 00:21:27.295 12:15:40 -- common/autotest_common.sh@945 -- # kill 1524465 00:21:27.295 Received shutdown signal, test time was about 10.000000 seconds 00:21:27.295 00:21:27.295 Latency(us) 00:21:27.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:27.295 =================================================================================================================== 00:21:27.295 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:27.295 12:15:40 -- common/autotest_common.sh@950 -- # wait 1524465 00:21:27.574 12:15:40 -- target/tls.sh@37 -- # return 1 00:21:27.574 12:15:40 -- common/autotest_common.sh@643 -- # es=1 00:21:27.574 12:15:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:27.574 12:15:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:27.574 12:15:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:27.574 12:15:40 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:27.574 12:15:40 -- common/autotest_common.sh@640 -- # local es=0 00:21:27.574 12:15:40 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:27.574 12:15:40 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:27.574 12:15:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:27.574 12:15:40 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:27.574 12:15:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:27.574 12:15:40 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:27.574 12:15:40 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:27.574 12:15:40 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:27.574 12:15:40 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:27.574 12:15:40 -- target/tls.sh@23 -- # psk= 00:21:27.574 12:15:40 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:27.574 12:15:40 -- target/tls.sh@28 
-- # bdevperf_pid=1524570 00:21:27.574 12:15:40 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:27.574 12:15:40 -- target/tls.sh@31 -- # waitforlisten 1524570 /var/tmp/bdevperf.sock 00:21:27.574 12:15:40 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:27.574 12:15:40 -- common/autotest_common.sh@819 -- # '[' -z 1524570 ']' 00:21:27.574 12:15:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:27.574 12:15:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:27.574 12:15:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:27.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:27.574 12:15:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:27.574 12:15:40 -- common/autotest_common.sh@10 -- # set +x 00:21:27.574 [2024-06-11 12:15:40.427777] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:27.574 [2024-06-11 12:15:40.427837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524570 ] 00:21:27.574 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.574 [2024-06-11 12:15:40.480237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.574 [2024-06-11 12:15:40.504900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:28.189 12:15:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:28.189 12:15:41 -- common/autotest_common.sh@852 -- # return 0 00:21:28.189 12:15:41 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:28.449 [2024-06-11 12:15:41.317523] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:28.449 [2024-06-11 12:15:41.318960] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2014330 (9): Bad file descriptor 00:21:28.449 [2024-06-11 12:15:41.319959] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:28.449 [2024-06-11 12:15:41.319965] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:28.449 [2024-06-11 12:15:41.319971] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
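The sequence above is the deliberate negative case from target/tls.sh@164: bdevperf is pointed at the test target's TLS listener without any --psk, the plain-TCP connection does not survive setup (the spdk_sock_recv errno 107 above), and the attach RPC reports the error shown next, which the NOT wrapper turns into the expected non-zero exit. A rough stand-alone sketch of that step, with the flags copied from the trace (paths abbreviated), would be:

  # start bdevperf in wait-for-RPC mode, then attach with no PSK at all
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # no --psk: against a listener created with -k this attach is expected to fail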
00:21:28.449 request: 00:21:28.449 { 00:21:28.449 "name": "TLSTEST", 00:21:28.449 "trtype": "tcp", 00:21:28.449 "traddr": "10.0.0.2", 00:21:28.449 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:28.449 "adrfam": "ipv4", 00:21:28.449 "trsvcid": "4420", 00:21:28.449 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.449 "method": "bdev_nvme_attach_controller", 00:21:28.449 "req_id": 1 00:21:28.449 } 00:21:28.449 Got JSON-RPC error response 00:21:28.449 response: 00:21:28.449 { 00:21:28.449 "code": -32602, 00:21:28.449 "message": "Invalid parameters" 00:21:28.449 } 00:21:28.449 12:15:41 -- target/tls.sh@36 -- # killprocess 1524570 00:21:28.449 12:15:41 -- common/autotest_common.sh@926 -- # '[' -z 1524570 ']' 00:21:28.449 12:15:41 -- common/autotest_common.sh@930 -- # kill -0 1524570 00:21:28.449 12:15:41 -- common/autotest_common.sh@931 -- # uname 00:21:28.449 12:15:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:28.449 12:15:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1524570 00:21:28.449 12:15:41 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:28.450 12:15:41 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:28.450 12:15:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1524570' 00:21:28.450 killing process with pid 1524570 00:21:28.450 12:15:41 -- common/autotest_common.sh@945 -- # kill 1524570 00:21:28.450 Received shutdown signal, test time was about 10.000000 seconds 00:21:28.450 00:21:28.450 Latency(us) 00:21:28.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.450 =================================================================================================================== 00:21:28.450 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:28.450 12:15:41 -- common/autotest_common.sh@950 -- # wait 1524570 00:21:28.710 12:15:41 -- target/tls.sh@37 -- # return 1 00:21:28.710 12:15:41 -- common/autotest_common.sh@643 -- # es=1 00:21:28.710 12:15:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:28.710 12:15:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:28.710 12:15:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:28.710 12:15:41 -- target/tls.sh@167 -- # killprocess 1518952 00:21:28.710 12:15:41 -- common/autotest_common.sh@926 -- # '[' -z 1518952 ']' 00:21:28.710 12:15:41 -- common/autotest_common.sh@930 -- # kill -0 1518952 00:21:28.710 12:15:41 -- common/autotest_common.sh@931 -- # uname 00:21:28.710 12:15:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:28.710 12:15:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1518952 00:21:28.710 12:15:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:28.710 12:15:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:28.710 12:15:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1518952' 00:21:28.710 killing process with pid 1518952 00:21:28.710 12:15:41 -- common/autotest_common.sh@945 -- # kill 1518952 00:21:28.710 12:15:41 -- common/autotest_common.sh@950 -- # wait 1518952 00:21:28.710 12:15:41 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:21:28.710 12:15:41 -- target/tls.sh@49 -- # local key hash crc 00:21:28.710 12:15:41 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:28.710 12:15:41 -- target/tls.sh@51 -- # hash=02 00:21:28.710 12:15:41 -- target/tls.sh@52 -- # echo 
-n 00112233445566778899aabbccddeeff0011223344556677 00:21:28.710 12:15:41 -- target/tls.sh@52 -- # head -c 4 00:21:28.710 12:15:41 -- target/tls.sh@52 -- # gzip -1 -c 00:21:28.710 12:15:41 -- target/tls.sh@52 -- # tail -c8 00:21:28.710 12:15:41 -- target/tls.sh@52 -- # crc='�e�'\''' 00:21:28.710 12:15:41 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:21:28.710 12:15:41 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:21:28.710 12:15:41 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:28.710 12:15:41 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:28.710 12:15:41 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:28.710 12:15:41 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:28.710 12:15:41 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:28.710 12:15:41 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:21:28.710 12:15:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:28.710 12:15:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:28.710 12:15:41 -- common/autotest_common.sh@10 -- # set +x 00:21:28.710 12:15:41 -- nvmf/common.sh@469 -- # nvmfpid=1524936 00:21:28.710 12:15:41 -- nvmf/common.sh@470 -- # waitforlisten 1524936 00:21:28.710 12:15:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:28.710 12:15:41 -- common/autotest_common.sh@819 -- # '[' -z 1524936 ']' 00:21:28.710 12:15:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.710 12:15:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:28.710 12:15:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.710 12:15:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:28.710 12:15:41 -- common/autotest_common.sh@10 -- # set +x 00:21:28.970 [2024-06-11 12:15:41.757165] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:28.970 [2024-06-11 12:15:41.757223] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.970 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.970 [2024-06-11 12:15:41.838186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.970 [2024-06-11 12:15:41.865890] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:28.970 [2024-06-11 12:15:41.865991] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.970 [2024-06-11 12:15:41.865997] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.970 [2024-06-11 12:15:41.866001] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
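The key_long value above comes from format_interchange_psk: the configured key string is kept as ASCII, its CRC32 is pulled out of the gzip trailer (the last 8 bytes of a gzip stream are the little-endian CRC32 followed by ISIZE, and head -c4 keeps just the CRC), the CRC is appended to the key, and the result is base64-encoded behind the NVMeTLSkey-1:02: prefix (hash=02 selects the SHA-384 variant of the interchange format). A stand-alone sketch of the same pipeline, using the key value from the trace:

  key=00112233445566778899aabbccddeeff0011223344556677
  # gzip trailer = CRC32 (little-endian, 4 bytes) + ISIZE (4 bytes); keep only the CRC32
  printf 'NVMeTLSkey-1:02:%s:\n' "$({ echo -n "$key"; echo -n "$key" | gzip -1 -c | tail -c8 | head -c4; } | base64)"
  # prints NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: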
00:21:28.970 [2024-06-11 12:15:41.866025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.540 12:15:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:29.540 12:15:42 -- common/autotest_common.sh@852 -- # return 0 00:21:29.540 12:15:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:29.540 12:15:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:29.540 12:15:42 -- common/autotest_common.sh@10 -- # set +x 00:21:29.540 12:15:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:29.540 12:15:42 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:29.540 12:15:42 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:29.540 12:15:42 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:29.799 [2024-06-11 12:15:42.679626] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:29.799 12:15:42 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:30.059 12:15:42 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:30.059 [2024-06-11 12:15:42.976352] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:30.059 [2024-06-11 12:15:42.976529] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.059 12:15:42 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:30.319 malloc0 00:21:30.319 12:15:43 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:30.319 12:15:43 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:30.579 12:15:43 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:30.579 12:15:43 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:30.579 12:15:43 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:30.579 12:15:43 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:30.579 12:15:43 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:21:30.579 12:15:43 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:30.579 12:15:43 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:30.579 12:15:43 -- target/tls.sh@28 -- # bdevperf_pid=1525296 00:21:30.579 12:15:43 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:30.579 12:15:43 -- target/tls.sh@31 -- # waitforlisten 1525296 /var/tmp/bdevperf.sock 00:21:30.579 12:15:43 -- common/autotest_common.sh@819 -- # '[' -z 1525296 
']' 00:21:30.579 12:15:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:30.579 12:15:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:30.579 12:15:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:30.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:30.579 12:15:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:30.579 12:15:43 -- common/autotest_common.sh@10 -- # set +x 00:21:30.579 [2024-06-11 12:15:43.466968] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:30.579 [2024-06-11 12:15:43.467022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1525296 ] 00:21:30.579 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.579 [2024-06-11 12:15:43.517182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.579 [2024-06-11 12:15:43.543585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.520 12:15:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:31.520 12:15:44 -- common/autotest_common.sh@852 -- # return 0 00:21:31.520 12:15:44 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:31.520 [2024-06-11 12:15:44.359340] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:31.520 TLSTESTn1 00:21:31.520 12:15:44 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:31.520 Running I/O for 10 seconds... 
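The run that starts here is the positive case: the target was prepared with the RPC sequence traced above (TLS listener created with -k, host NQN registered against the 0600 key file), bdevperf attaches with the matching --psk, and perform_tests then drives verify I/O for 10 seconds. Condensed, the target-side sequence is (rpc.py path abbreviated, values copied from the trace):

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt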
00:21:43.751 00:21:43.751 Latency(us) 00:21:43.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.751 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:43.751 Verification LBA range: start 0x0 length 0x2000 00:21:43.751 TLSTESTn1 : 10.01 6878.55 26.87 0.00 0.00 18591.40 2949.12 46967.47 00:21:43.751 =================================================================================================================== 00:21:43.751 Total : 6878.55 26.87 0.00 0.00 18591.40 2949.12 46967.47 00:21:43.751 0 00:21:43.751 12:15:54 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:43.751 12:15:54 -- target/tls.sh@45 -- # killprocess 1525296 00:21:43.751 12:15:54 -- common/autotest_common.sh@926 -- # '[' -z 1525296 ']' 00:21:43.751 12:15:54 -- common/autotest_common.sh@930 -- # kill -0 1525296 00:21:43.751 12:15:54 -- common/autotest_common.sh@931 -- # uname 00:21:43.751 12:15:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:43.751 12:15:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1525296 00:21:43.751 12:15:54 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:43.751 12:15:54 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:43.751 12:15:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1525296' 00:21:43.751 killing process with pid 1525296 00:21:43.751 12:15:54 -- common/autotest_common.sh@945 -- # kill 1525296 00:21:43.751 Received shutdown signal, test time was about 10.000000 seconds 00:21:43.751 00:21:43.751 Latency(us) 00:21:43.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.751 =================================================================================================================== 00:21:43.751 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:43.752 12:15:54 -- common/autotest_common.sh@950 -- # wait 1525296 00:21:43.752 12:15:54 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:43.752 12:15:54 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:43.752 12:15:54 -- common/autotest_common.sh@640 -- # local es=0 00:21:43.752 12:15:54 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:43.752 12:15:54 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:43.752 12:15:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:43.752 12:15:54 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:43.752 12:15:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:43.752 12:15:54 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:43.752 12:15:54 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:43.752 12:15:54 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:43.752 12:15:54 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:43.752 12:15:54 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:21:43.752 12:15:54 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:43.752 12:15:54 -- target/tls.sh@28 -- # bdevperf_pid=1527558 00:21:43.752 12:15:54 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:43.752 12:15:54 -- target/tls.sh@31 -- # waitforlisten 1527558 /var/tmp/bdevperf.sock 00:21:43.752 12:15:54 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:43.752 12:15:54 -- common/autotest_common.sh@819 -- # '[' -z 1527558 ']' 00:21:43.752 12:15:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:43.752 12:15:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:43.752 12:15:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:43.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:43.752 12:15:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:43.752 12:15:54 -- common/autotest_common.sh@10 -- # set +x 00:21:43.752 [2024-06-11 12:15:54.807181] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:43.752 [2024-06-11 12:15:54.807241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527558 ] 00:21:43.752 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.752 [2024-06-11 12:15:54.857447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.752 [2024-06-11 12:15:54.883658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.752 12:15:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:43.752 12:15:55 -- common/autotest_common.sh@852 -- # return 0 00:21:43.752 12:15:55 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:43.752 [2024-06-11 12:15:55.703291] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:43.752 [2024-06-11 12:15:55.703320] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:43.752 request: 00:21:43.752 { 00:21:43.752 "name": "TLSTEST", 00:21:43.752 "trtype": "tcp", 00:21:43.752 "traddr": "10.0.0.2", 00:21:43.752 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:43.752 "adrfam": "ipv4", 00:21:43.752 "trsvcid": "4420", 00:21:43.752 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.752 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:21:43.752 "method": "bdev_nvme_attach_controller", 00:21:43.752 "req_id": 1 00:21:43.752 } 00:21:43.752 Got JSON-RPC error response 00:21:43.752 response: 00:21:43.752 { 00:21:43.752 "code": -22, 00:21:43.752 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:21:43.752 } 00:21:43.752 12:15:55 -- target/tls.sh@36 -- # killprocess 1527558 00:21:43.752 12:15:55 -- common/autotest_common.sh@926 -- # '[' -z 1527558 ']' 00:21:43.752 12:15:55 -- 
common/autotest_common.sh@930 -- # kill -0 1527558 00:21:43.752 12:15:55 -- common/autotest_common.sh@931 -- # uname 00:21:43.752 12:15:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:43.752 12:15:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1527558 00:21:43.752 12:15:55 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:43.752 12:15:55 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:43.752 12:15:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1527558' 00:21:43.752 killing process with pid 1527558 00:21:43.752 12:15:55 -- common/autotest_common.sh@945 -- # kill 1527558 00:21:43.752 Received shutdown signal, test time was about 10.000000 seconds 00:21:43.752 00:21:43.752 Latency(us) 00:21:43.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.752 =================================================================================================================== 00:21:43.752 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:43.752 12:15:55 -- common/autotest_common.sh@950 -- # wait 1527558 00:21:43.752 12:15:55 -- target/tls.sh@37 -- # return 1 00:21:43.752 12:15:55 -- common/autotest_common.sh@643 -- # es=1 00:21:43.752 12:15:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:43.752 12:15:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:43.752 12:15:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:43.752 12:15:55 -- target/tls.sh@183 -- # killprocess 1524936 00:21:43.752 12:15:55 -- common/autotest_common.sh@926 -- # '[' -z 1524936 ']' 00:21:43.752 12:15:55 -- common/autotest_common.sh@930 -- # kill -0 1524936 00:21:43.752 12:15:55 -- common/autotest_common.sh@931 -- # uname 00:21:43.752 12:15:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:43.752 12:15:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1524936 00:21:43.752 12:15:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:43.752 12:15:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:43.752 12:15:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1524936' 00:21:43.752 killing process with pid 1524936 00:21:43.752 12:15:55 -- common/autotest_common.sh@945 -- # kill 1524936 00:21:43.752 12:15:55 -- common/autotest_common.sh@950 -- # wait 1524936 00:21:43.752 12:15:56 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:43.752 12:15:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:43.752 12:15:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:43.752 12:15:56 -- common/autotest_common.sh@10 -- # set +x 00:21:43.752 12:15:56 -- nvmf/common.sh@469 -- # nvmfpid=1527717 00:21:43.752 12:15:56 -- nvmf/common.sh@470 -- # waitforlisten 1527717 00:21:43.752 12:15:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:43.752 12:15:56 -- common/autotest_common.sh@819 -- # '[' -z 1527717 ']' 00:21:43.752 12:15:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.752 12:15:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:43.752 12:15:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
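The -22 error above ("Could not retrieve PSK from file") is the point of this sub-test: after target/tls.sh@179 loosens the key file to 0666, the initiator-side tcp_load_psk rejects it because the PSK file must not be readable by group or other; the same check fires on the target side further down when nvmf_subsystem_add_host is attempted while the file is still world-readable, and target/tls.sh@190 then restores 0600 before the next positive run. Reduced to the shell operations involved (full path as in the trace):

  chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt   # rejected: "Incorrect permissions for PSK file"
  chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt   # owner-only mode is required before the key is usable again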
00:21:43.752 12:15:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:43.752 12:15:56 -- common/autotest_common.sh@10 -- # set +x 00:21:43.752 [2024-06-11 12:15:56.107688] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:43.752 [2024-06-11 12:15:56.107741] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.752 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.752 [2024-06-11 12:15:56.190700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.752 [2024-06-11 12:15:56.219238] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:43.752 [2024-06-11 12:15:56.219347] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:43.752 [2024-06-11 12:15:56.219354] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.752 [2024-06-11 12:15:56.219359] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:43.752 [2024-06-11 12:15:56.219382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.012 12:15:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:44.012 12:15:56 -- common/autotest_common.sh@852 -- # return 0 00:21:44.012 12:15:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:44.012 12:15:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:44.012 12:15:56 -- common/autotest_common.sh@10 -- # set +x 00:21:44.012 12:15:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.012 12:15:56 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:44.012 12:15:56 -- common/autotest_common.sh@640 -- # local es=0 00:21:44.012 12:15:56 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:44.013 12:15:56 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:21:44.013 12:15:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:44.013 12:15:56 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:21:44.013 12:15:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:44.013 12:15:56 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:44.013 12:15:56 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:44.013 12:15:56 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:44.013 [2024-06-11 12:15:57.033971] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.272 12:15:57 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:44.272 12:15:57 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:44.532 [2024-06-11 12:15:57.318668] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:44.532 [2024-06-11 12:15:57.318822] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:44.532 12:15:57 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:44.532 malloc0 00:21:44.532 12:15:57 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:44.793 12:15:57 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:44.793 [2024-06-11 12:15:57.737404] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:44.793 [2024-06-11 12:15:57.737421] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:44.793 [2024-06-11 12:15:57.737436] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:21:44.793 request: 00:21:44.793 { 00:21:44.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.793 "host": "nqn.2016-06.io.spdk:host1", 00:21:44.793 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:21:44.793 "method": "nvmf_subsystem_add_host", 00:21:44.793 "req_id": 1 00:21:44.793 } 00:21:44.793 Got JSON-RPC error response 00:21:44.793 response: 00:21:44.793 { 00:21:44.793 "code": -32603, 00:21:44.793 "message": "Internal error" 00:21:44.793 } 00:21:44.793 12:15:57 -- common/autotest_common.sh@643 -- # es=1 00:21:44.793 12:15:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:44.793 12:15:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:44.793 12:15:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:44.793 12:15:57 -- target/tls.sh@189 -- # killprocess 1527717 00:21:44.793 12:15:57 -- common/autotest_common.sh@926 -- # '[' -z 1527717 ']' 00:21:44.793 12:15:57 -- common/autotest_common.sh@930 -- # kill -0 1527717 00:21:44.793 12:15:57 -- common/autotest_common.sh@931 -- # uname 00:21:44.793 12:15:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:44.793 12:15:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1527717 00:21:44.793 12:15:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:44.793 12:15:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:44.793 12:15:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1527717' 00:21:44.793 killing process with pid 1527717 00:21:44.793 12:15:57 -- common/autotest_common.sh@945 -- # kill 1527717 00:21:44.793 12:15:57 -- common/autotest_common.sh@950 -- # wait 1527717 00:21:45.053 12:15:57 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:45.053 12:15:57 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:21:45.053 12:15:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:45.053 12:15:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:45.053 12:15:57 -- common/autotest_common.sh@10 -- # set +x 00:21:45.053 12:15:57 -- nvmf/common.sh@469 -- # nvmfpid=1528157 00:21:45.053 12:15:57 -- nvmf/common.sh@470 -- # waitforlisten 1528157 00:21:45.053 12:15:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:45.053 12:15:57 -- common/autotest_common.sh@819 -- # '[' -z 1528157 ']' 00:21:45.053 12:15:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.053 12:15:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:45.053 12:15:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.053 12:15:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:45.053 12:15:57 -- common/autotest_common.sh@10 -- # set +x 00:21:45.053 [2024-06-11 12:15:57.976659] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:45.053 [2024-06-11 12:15:57.976712] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.053 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.053 [2024-06-11 12:15:58.061301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.313 [2024-06-11 12:15:58.090364] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:45.313 [2024-06-11 12:15:58.090469] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.313 [2024-06-11 12:15:58.090476] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.313 [2024-06-11 12:15:58.090482] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:45.313 [2024-06-11 12:15:58.090499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.882 12:15:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:45.882 12:15:58 -- common/autotest_common.sh@852 -- # return 0 00:21:45.882 12:15:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:45.882 12:15:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:45.882 12:15:58 -- common/autotest_common.sh@10 -- # set +x 00:21:45.882 12:15:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.882 12:15:58 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:45.882 12:15:58 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:45.882 12:15:58 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:45.882 [2024-06-11 12:15:58.905355] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.142 12:15:58 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:46.142 12:15:59 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:46.401 [2024-06-11 12:15:59.190045] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:46.401 [2024-06-11 12:15:59.190216] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.401 12:15:59 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:46.401 malloc0 00:21:46.401 12:15:59 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:46.662 12:15:59 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:46.662 12:15:59 -- target/tls.sh@197 -- # bdevperf_pid=1528503 00:21:46.662 12:15:59 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:46.662 12:15:59 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:46.662 12:15:59 -- target/tls.sh@200 -- # waitforlisten 1528503 /var/tmp/bdevperf.sock 00:21:46.662 12:15:59 -- common/autotest_common.sh@819 -- # '[' -z 1528503 ']' 00:21:46.662 12:15:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:46.662 12:15:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:46.662 12:15:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:46.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:46.662 12:15:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:46.662 12:15:59 -- common/autotest_common.sh@10 -- # set +x 00:21:46.662 [2024-06-11 12:15:59.667712] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:46.662 [2024-06-11 12:15:59.667764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1528503 ] 00:21:46.662 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.922 [2024-06-11 12:15:59.718099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.922 [2024-06-11 12:15:59.744707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.491 12:16:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:47.491 12:16:00 -- common/autotest_common.sh@852 -- # return 0 00:21:47.491 12:16:00 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:47.750 [2024-06-11 12:16:00.572582] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:47.750 TLSTESTn1 00:21:47.750 12:16:00 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:48.010 12:16:00 -- target/tls.sh@205 -- # tgtconf='{ 00:21:48.010 "subsystems": [ 00:21:48.010 { 00:21:48.010 "subsystem": "iobuf", 00:21:48.010 "config": [ 00:21:48.010 { 00:21:48.010 "method": "iobuf_set_options", 00:21:48.010 "params": { 00:21:48.011 "small_pool_count": 8192, 00:21:48.011 "large_pool_count": 1024, 00:21:48.011 "small_bufsize": 8192, 00:21:48.011 "large_bufsize": 135168 00:21:48.011 } 00:21:48.011 } 00:21:48.011 ] 00:21:48.011 }, 00:21:48.011 { 00:21:48.011 "subsystem": "sock", 00:21:48.011 "config": [ 00:21:48.011 { 00:21:48.011 "method": "sock_impl_set_options", 00:21:48.011 "params": { 00:21:48.011 "impl_name": "posix", 00:21:48.011 "recv_buf_size": 2097152, 00:21:48.011 "send_buf_size": 2097152, 00:21:48.011 "enable_recv_pipe": true, 00:21:48.011 "enable_quickack": false, 00:21:48.011 "enable_placement_id": 0, 00:21:48.011 "enable_zerocopy_send_server": true, 00:21:48.011 "enable_zerocopy_send_client": false, 00:21:48.011 "zerocopy_threshold": 0, 00:21:48.011 "tls_version": 0, 00:21:48.011 "enable_ktls": false 00:21:48.011 } 00:21:48.011 }, 00:21:48.011 { 00:21:48.011 "method": "sock_impl_set_options", 00:21:48.011 "params": { 00:21:48.011 "impl_name": "ssl", 00:21:48.011 "recv_buf_size": 4096, 00:21:48.011 "send_buf_size": 4096, 00:21:48.011 "enable_recv_pipe": true, 00:21:48.011 "enable_quickack": false, 00:21:48.011 "enable_placement_id": 0, 00:21:48.011 "enable_zerocopy_send_server": true, 00:21:48.011 "enable_zerocopy_send_client": false, 00:21:48.011 "zerocopy_threshold": 0, 00:21:48.011 "tls_version": 0, 00:21:48.011 "enable_ktls": false 00:21:48.011 } 00:21:48.011 } 00:21:48.011 ] 00:21:48.011 }, 00:21:48.011 { 00:21:48.011 "subsystem": "vmd", 00:21:48.011 "config": [] 00:21:48.011 }, 00:21:48.011 { 00:21:48.011 "subsystem": "accel", 00:21:48.011 "config": [ 00:21:48.011 { 00:21:48.011 "method": "accel_set_options", 00:21:48.011 "params": { 00:21:48.011 "small_cache_size": 128, 
00:21:48.011 "large_cache_size": 16, 00:21:48.011 "task_count": 2048, 00:21:48.011 "sequence_count": 2048, 00:21:48.011 "buf_count": 2048 00:21:48.011 } 00:21:48.011 } 00:21:48.011 ] 00:21:48.011 }, 00:21:48.011 { 00:21:48.011 "subsystem": "bdev", 00:21:48.011 "config": [ 00:21:48.011 { 00:21:48.011 "method": "bdev_set_options", 00:21:48.011 "params": { 00:21:48.011 "bdev_io_pool_size": 65535, 00:21:48.011 "bdev_io_cache_size": 256, 00:21:48.011 "bdev_auto_examine": true, 00:21:48.011 "iobuf_small_cache_size": 128, 00:21:48.011 "iobuf_large_cache_size": 16 00:21:48.011 } 00:21:48.011 }, 00:21:48.011 { 00:21:48.011 "method": "bdev_raid_set_options", 00:21:48.011 "params": { 00:21:48.011 "process_window_size_kb": 1024 00:21:48.011 } 00:21:48.011 }, 00:21:48.011 { 00:21:48.011 "method": "bdev_iscsi_set_options", 00:21:48.011 "params": { 00:21:48.011 "timeout_sec": 30 00:21:48.011 } 00:21:48.011 }, 00:21:48.011 { 00:21:48.011 "method": "bdev_nvme_set_options", 00:21:48.011 "params": { 00:21:48.011 "action_on_timeout": "none", 00:21:48.011 "timeout_us": 0, 00:21:48.011 "timeout_admin_us": 0, 00:21:48.011 "keep_alive_timeout_ms": 10000, 00:21:48.011 "transport_retry_count": 4, 00:21:48.011 "arbitration_burst": 0, 00:21:48.011 "low_priority_weight": 0, 00:21:48.011 "medium_priority_weight": 0, 00:21:48.011 "high_priority_weight": 0, 00:21:48.011 "nvme_adminq_poll_period_us": 10000, 00:21:48.011 "nvme_ioq_poll_period_us": 0, 00:21:48.011 "io_queue_requests": 0, 00:21:48.011 "delay_cmd_submit": true, 00:21:48.011 "bdev_retry_count": 3, 00:21:48.011 "transport_ack_timeout": 0, 00:21:48.011 "ctrlr_loss_timeout_sec": 0, 00:21:48.011 "reconnect_delay_sec": 0, 00:21:48.011 "fast_io_fail_timeout_sec": 0, 00:21:48.011 "generate_uuids": false, 00:21:48.011 "transport_tos": 0, 00:21:48.011 "io_path_stat": false, 00:21:48.011 "allow_accel_sequence": false 00:21:48.011 } 00:21:48.011 }, 00:21:48.011 { 00:21:48.011 "method": "bdev_nvme_set_hotplug", 00:21:48.011 "params": { 00:21:48.011 "period_us": 100000, 00:21:48.011 "enable": false 00:21:48.011 } 00:21:48.011 }, 00:21:48.011 { 00:21:48.011 "method": "bdev_malloc_create", 00:21:48.011 "params": { 00:21:48.011 "name": "malloc0", 00:21:48.011 "num_blocks": 8192, 00:21:48.011 "block_size": 4096, 00:21:48.011 "physical_block_size": 4096, 00:21:48.011 "uuid": "e5f37140-3903-459c-b4cc-1f1c827407af", 00:21:48.011 "optimal_io_boundary": 0 00:21:48.011 } 00:21:48.011 }, 00:21:48.011 { 00:21:48.011 "method": "bdev_wait_for_examine" 00:21:48.011 } 00:21:48.011 ] 00:21:48.011 }, 00:21:48.011 { 00:21:48.011 "subsystem": "nbd", 00:21:48.011 "config": [] 00:21:48.011 }, 00:21:48.011 { 00:21:48.011 "subsystem": "scheduler", 00:21:48.011 "config": [ 00:21:48.011 { 00:21:48.011 "method": "framework_set_scheduler", 00:21:48.011 "params": { 00:21:48.011 "name": "static" 00:21:48.011 } 00:21:48.011 } 00:21:48.011 ] 00:21:48.011 }, 00:21:48.011 { 00:21:48.011 "subsystem": "nvmf", 00:21:48.011 "config": [ 00:21:48.011 { 00:21:48.011 "method": "nvmf_set_config", 00:21:48.011 "params": { 00:21:48.011 "discovery_filter": "match_any", 00:21:48.011 "admin_cmd_passthru": { 00:21:48.011 "identify_ctrlr": false 00:21:48.011 } 00:21:48.011 } 00:21:48.011 }, 00:21:48.011 { 00:21:48.011 "method": "nvmf_set_max_subsystems", 00:21:48.011 "params": { 00:21:48.011 "max_subsystems": 1024 00:21:48.011 } 00:21:48.011 }, 00:21:48.011 { 00:21:48.011 "method": "nvmf_set_crdt", 00:21:48.011 "params": { 00:21:48.011 "crdt1": 0, 00:21:48.011 "crdt2": 0, 00:21:48.011 "crdt3": 0 00:21:48.011 } 
00:21:48.011 }, 00:21:48.011 { 00:21:48.011 "method": "nvmf_create_transport", 00:21:48.011 "params": { 00:21:48.011 "trtype": "TCP", 00:21:48.011 "max_queue_depth": 128, 00:21:48.011 "max_io_qpairs_per_ctrlr": 127, 00:21:48.011 "in_capsule_data_size": 4096, 00:21:48.011 "max_io_size": 131072, 00:21:48.011 "io_unit_size": 131072, 00:21:48.011 "max_aq_depth": 128, 00:21:48.011 "num_shared_buffers": 511, 00:21:48.011 "buf_cache_size": 4294967295, 00:21:48.011 "dif_insert_or_strip": false, 00:21:48.011 "zcopy": false, 00:21:48.011 "c2h_success": false, 00:21:48.011 "sock_priority": 0, 00:21:48.011 "abort_timeout_sec": 1 00:21:48.011 } 00:21:48.011 }, 00:21:48.011 { 00:21:48.011 "method": "nvmf_create_subsystem", 00:21:48.011 "params": { 00:21:48.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.011 "allow_any_host": false, 00:21:48.011 "serial_number": "SPDK00000000000001", 00:21:48.011 "model_number": "SPDK bdev Controller", 00:21:48.011 "max_namespaces": 10, 00:21:48.011 "min_cntlid": 1, 00:21:48.011 "max_cntlid": 65519, 00:21:48.011 "ana_reporting": false 00:21:48.011 } 00:21:48.011 }, 00:21:48.011 { 00:21:48.011 "method": "nvmf_subsystem_add_host", 00:21:48.011 "params": { 00:21:48.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.011 "host": "nqn.2016-06.io.spdk:host1", 00:21:48.011 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:21:48.011 } 00:21:48.011 }, 00:21:48.011 { 00:21:48.011 "method": "nvmf_subsystem_add_ns", 00:21:48.011 "params": { 00:21:48.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.011 "namespace": { 00:21:48.011 "nsid": 1, 00:21:48.011 "bdev_name": "malloc0", 00:21:48.011 "nguid": "E5F371403903459CB4CC1F1C827407AF", 00:21:48.011 "uuid": "e5f37140-3903-459c-b4cc-1f1c827407af" 00:21:48.011 } 00:21:48.011 } 00:21:48.011 }, 00:21:48.011 { 00:21:48.011 "method": "nvmf_subsystem_add_listener", 00:21:48.011 "params": { 00:21:48.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.011 "listen_address": { 00:21:48.011 "trtype": "TCP", 00:21:48.011 "adrfam": "IPv4", 00:21:48.012 "traddr": "10.0.0.2", 00:21:48.012 "trsvcid": "4420" 00:21:48.012 }, 00:21:48.012 "secure_channel": true 00:21:48.012 } 00:21:48.012 } 00:21:48.012 ] 00:21:48.012 } 00:21:48.012 ] 00:21:48.012 }' 00:21:48.012 12:16:00 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:48.271 12:16:01 -- target/tls.sh@206 -- # bdevperfconf='{ 00:21:48.271 "subsystems": [ 00:21:48.271 { 00:21:48.271 "subsystem": "iobuf", 00:21:48.271 "config": [ 00:21:48.271 { 00:21:48.271 "method": "iobuf_set_options", 00:21:48.271 "params": { 00:21:48.271 "small_pool_count": 8192, 00:21:48.271 "large_pool_count": 1024, 00:21:48.271 "small_bufsize": 8192, 00:21:48.271 "large_bufsize": 135168 00:21:48.271 } 00:21:48.271 } 00:21:48.271 ] 00:21:48.271 }, 00:21:48.271 { 00:21:48.271 "subsystem": "sock", 00:21:48.271 "config": [ 00:21:48.271 { 00:21:48.271 "method": "sock_impl_set_options", 00:21:48.271 "params": { 00:21:48.271 "impl_name": "posix", 00:21:48.271 "recv_buf_size": 2097152, 00:21:48.271 "send_buf_size": 2097152, 00:21:48.272 "enable_recv_pipe": true, 00:21:48.272 "enable_quickack": false, 00:21:48.272 "enable_placement_id": 0, 00:21:48.272 "enable_zerocopy_send_server": true, 00:21:48.272 "enable_zerocopy_send_client": false, 00:21:48.272 "zerocopy_threshold": 0, 00:21:48.272 "tls_version": 0, 00:21:48.272 "enable_ktls": false 00:21:48.272 } 00:21:48.272 }, 00:21:48.272 { 00:21:48.272 "method": 
"sock_impl_set_options", 00:21:48.272 "params": { 00:21:48.272 "impl_name": "ssl", 00:21:48.272 "recv_buf_size": 4096, 00:21:48.272 "send_buf_size": 4096, 00:21:48.272 "enable_recv_pipe": true, 00:21:48.272 "enable_quickack": false, 00:21:48.272 "enable_placement_id": 0, 00:21:48.272 "enable_zerocopy_send_server": true, 00:21:48.272 "enable_zerocopy_send_client": false, 00:21:48.272 "zerocopy_threshold": 0, 00:21:48.272 "tls_version": 0, 00:21:48.272 "enable_ktls": false 00:21:48.272 } 00:21:48.272 } 00:21:48.272 ] 00:21:48.272 }, 00:21:48.272 { 00:21:48.272 "subsystem": "vmd", 00:21:48.272 "config": [] 00:21:48.272 }, 00:21:48.272 { 00:21:48.272 "subsystem": "accel", 00:21:48.272 "config": [ 00:21:48.272 { 00:21:48.272 "method": "accel_set_options", 00:21:48.272 "params": { 00:21:48.272 "small_cache_size": 128, 00:21:48.272 "large_cache_size": 16, 00:21:48.272 "task_count": 2048, 00:21:48.272 "sequence_count": 2048, 00:21:48.272 "buf_count": 2048 00:21:48.272 } 00:21:48.272 } 00:21:48.272 ] 00:21:48.272 }, 00:21:48.272 { 00:21:48.272 "subsystem": "bdev", 00:21:48.272 "config": [ 00:21:48.272 { 00:21:48.272 "method": "bdev_set_options", 00:21:48.272 "params": { 00:21:48.272 "bdev_io_pool_size": 65535, 00:21:48.272 "bdev_io_cache_size": 256, 00:21:48.272 "bdev_auto_examine": true, 00:21:48.272 "iobuf_small_cache_size": 128, 00:21:48.272 "iobuf_large_cache_size": 16 00:21:48.272 } 00:21:48.272 }, 00:21:48.272 { 00:21:48.272 "method": "bdev_raid_set_options", 00:21:48.272 "params": { 00:21:48.272 "process_window_size_kb": 1024 00:21:48.272 } 00:21:48.272 }, 00:21:48.272 { 00:21:48.272 "method": "bdev_iscsi_set_options", 00:21:48.272 "params": { 00:21:48.272 "timeout_sec": 30 00:21:48.272 } 00:21:48.272 }, 00:21:48.272 { 00:21:48.272 "method": "bdev_nvme_set_options", 00:21:48.272 "params": { 00:21:48.272 "action_on_timeout": "none", 00:21:48.272 "timeout_us": 0, 00:21:48.272 "timeout_admin_us": 0, 00:21:48.272 "keep_alive_timeout_ms": 10000, 00:21:48.272 "transport_retry_count": 4, 00:21:48.272 "arbitration_burst": 0, 00:21:48.272 "low_priority_weight": 0, 00:21:48.272 "medium_priority_weight": 0, 00:21:48.272 "high_priority_weight": 0, 00:21:48.272 "nvme_adminq_poll_period_us": 10000, 00:21:48.272 "nvme_ioq_poll_period_us": 0, 00:21:48.272 "io_queue_requests": 512, 00:21:48.272 "delay_cmd_submit": true, 00:21:48.272 "bdev_retry_count": 3, 00:21:48.272 "transport_ack_timeout": 0, 00:21:48.272 "ctrlr_loss_timeout_sec": 0, 00:21:48.272 "reconnect_delay_sec": 0, 00:21:48.272 "fast_io_fail_timeout_sec": 0, 00:21:48.272 "generate_uuids": false, 00:21:48.272 "transport_tos": 0, 00:21:48.272 "io_path_stat": false, 00:21:48.272 "allow_accel_sequence": false 00:21:48.272 } 00:21:48.272 }, 00:21:48.272 { 00:21:48.272 "method": "bdev_nvme_attach_controller", 00:21:48.272 "params": { 00:21:48.272 "name": "TLSTEST", 00:21:48.272 "trtype": "TCP", 00:21:48.272 "adrfam": "IPv4", 00:21:48.272 "traddr": "10.0.0.2", 00:21:48.272 "trsvcid": "4420", 00:21:48.272 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.272 "prchk_reftag": false, 00:21:48.272 "prchk_guard": false, 00:21:48.272 "ctrlr_loss_timeout_sec": 0, 00:21:48.272 "reconnect_delay_sec": 0, 00:21:48.272 "fast_io_fail_timeout_sec": 0, 00:21:48.272 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:21:48.272 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:48.272 "hdgst": false, 00:21:48.272 "ddgst": false 00:21:48.272 } 00:21:48.272 }, 00:21:48.272 { 00:21:48.272 "method": "bdev_nvme_set_hotplug", 00:21:48.272 
"params": { 00:21:48.272 "period_us": 100000, 00:21:48.272 "enable": false 00:21:48.272 } 00:21:48.272 }, 00:21:48.272 { 00:21:48.272 "method": "bdev_wait_for_examine" 00:21:48.272 } 00:21:48.272 ] 00:21:48.272 }, 00:21:48.272 { 00:21:48.272 "subsystem": "nbd", 00:21:48.272 "config": [] 00:21:48.272 } 00:21:48.272 ] 00:21:48.272 }' 00:21:48.272 12:16:01 -- target/tls.sh@208 -- # killprocess 1528503 00:21:48.272 12:16:01 -- common/autotest_common.sh@926 -- # '[' -z 1528503 ']' 00:21:48.272 12:16:01 -- common/autotest_common.sh@930 -- # kill -0 1528503 00:21:48.272 12:16:01 -- common/autotest_common.sh@931 -- # uname 00:21:48.272 12:16:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:48.272 12:16:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1528503 00:21:48.272 12:16:01 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:48.272 12:16:01 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:48.272 12:16:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1528503' 00:21:48.272 killing process with pid 1528503 00:21:48.272 12:16:01 -- common/autotest_common.sh@945 -- # kill 1528503 00:21:48.272 Received shutdown signal, test time was about 10.000000 seconds 00:21:48.272 00:21:48.272 Latency(us) 00:21:48.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.272 =================================================================================================================== 00:21:48.272 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:48.272 12:16:01 -- common/autotest_common.sh@950 -- # wait 1528503 00:21:48.272 12:16:01 -- target/tls.sh@209 -- # killprocess 1528157 00:21:48.272 12:16:01 -- common/autotest_common.sh@926 -- # '[' -z 1528157 ']' 00:21:48.272 12:16:01 -- common/autotest_common.sh@930 -- # kill -0 1528157 00:21:48.272 12:16:01 -- common/autotest_common.sh@931 -- # uname 00:21:48.272 12:16:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:48.272 12:16:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1528157 00:21:48.532 12:16:01 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:48.532 12:16:01 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:48.532 12:16:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1528157' 00:21:48.532 killing process with pid 1528157 00:21:48.532 12:16:01 -- common/autotest_common.sh@945 -- # kill 1528157 00:21:48.532 12:16:01 -- common/autotest_common.sh@950 -- # wait 1528157 00:21:48.532 12:16:01 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:48.532 12:16:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:48.532 12:16:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:48.532 12:16:01 -- common/autotest_common.sh@10 -- # set +x 00:21:48.532 12:16:01 -- target/tls.sh@212 -- # echo '{ 00:21:48.532 "subsystems": [ 00:21:48.532 { 00:21:48.532 "subsystem": "iobuf", 00:21:48.532 "config": [ 00:21:48.532 { 00:21:48.532 "method": "iobuf_set_options", 00:21:48.532 "params": { 00:21:48.532 "small_pool_count": 8192, 00:21:48.532 "large_pool_count": 1024, 00:21:48.532 "small_bufsize": 8192, 00:21:48.532 "large_bufsize": 135168 00:21:48.532 } 00:21:48.532 } 00:21:48.532 ] 00:21:48.532 }, 00:21:48.532 { 00:21:48.532 "subsystem": "sock", 00:21:48.532 "config": [ 00:21:48.532 { 00:21:48.532 "method": "sock_impl_set_options", 00:21:48.532 "params": { 00:21:48.532 "impl_name": "posix", 00:21:48.532 
"recv_buf_size": 2097152, 00:21:48.532 "send_buf_size": 2097152, 00:21:48.532 "enable_recv_pipe": true, 00:21:48.532 "enable_quickack": false, 00:21:48.532 "enable_placement_id": 0, 00:21:48.532 "enable_zerocopy_send_server": true, 00:21:48.532 "enable_zerocopy_send_client": false, 00:21:48.532 "zerocopy_threshold": 0, 00:21:48.532 "tls_version": 0, 00:21:48.532 "enable_ktls": false 00:21:48.532 } 00:21:48.532 }, 00:21:48.532 { 00:21:48.532 "method": "sock_impl_set_options", 00:21:48.532 "params": { 00:21:48.532 "impl_name": "ssl", 00:21:48.532 "recv_buf_size": 4096, 00:21:48.532 "send_buf_size": 4096, 00:21:48.532 "enable_recv_pipe": true, 00:21:48.532 "enable_quickack": false, 00:21:48.532 "enable_placement_id": 0, 00:21:48.532 "enable_zerocopy_send_server": true, 00:21:48.532 "enable_zerocopy_send_client": false, 00:21:48.532 "zerocopy_threshold": 0, 00:21:48.532 "tls_version": 0, 00:21:48.532 "enable_ktls": false 00:21:48.532 } 00:21:48.532 } 00:21:48.532 ] 00:21:48.532 }, 00:21:48.532 { 00:21:48.532 "subsystem": "vmd", 00:21:48.532 "config": [] 00:21:48.532 }, 00:21:48.532 { 00:21:48.532 "subsystem": "accel", 00:21:48.532 "config": [ 00:21:48.532 { 00:21:48.532 "method": "accel_set_options", 00:21:48.532 "params": { 00:21:48.532 "small_cache_size": 128, 00:21:48.532 "large_cache_size": 16, 00:21:48.532 "task_count": 2048, 00:21:48.532 "sequence_count": 2048, 00:21:48.532 "buf_count": 2048 00:21:48.532 } 00:21:48.532 } 00:21:48.532 ] 00:21:48.532 }, 00:21:48.532 { 00:21:48.532 "subsystem": "bdev", 00:21:48.532 "config": [ 00:21:48.532 { 00:21:48.532 "method": "bdev_set_options", 00:21:48.532 "params": { 00:21:48.532 "bdev_io_pool_size": 65535, 00:21:48.532 "bdev_io_cache_size": 256, 00:21:48.532 "bdev_auto_examine": true, 00:21:48.532 "iobuf_small_cache_size": 128, 00:21:48.532 "iobuf_large_cache_size": 16 00:21:48.532 } 00:21:48.532 }, 00:21:48.532 { 00:21:48.532 "method": "bdev_raid_set_options", 00:21:48.532 "params": { 00:21:48.532 "process_window_size_kb": 1024 00:21:48.532 } 00:21:48.532 }, 00:21:48.532 { 00:21:48.532 "method": "bdev_iscsi_set_options", 00:21:48.532 "params": { 00:21:48.532 "timeout_sec": 30 00:21:48.532 } 00:21:48.532 }, 00:21:48.532 { 00:21:48.532 "method": "bdev_nvme_set_options", 00:21:48.532 "params": { 00:21:48.532 "action_on_timeout": "none", 00:21:48.532 "timeout_us": 0, 00:21:48.532 "timeout_admin_us": 0, 00:21:48.532 "keep_alive_timeout_ms": 10000, 00:21:48.532 "transport_retry_count": 4, 00:21:48.532 "arbitration_burst": 0, 00:21:48.532 "low_priority_weight": 0, 00:21:48.532 "medium_priority_weight": 0, 00:21:48.532 "high_priority_weight": 0, 00:21:48.532 "nvme_adminq_poll_period_us": 10000, 00:21:48.532 "nvme_ioq_poll_period_us": 0, 00:21:48.532 "io_queue_requests": 0, 00:21:48.532 "delay_cmd_submit": true, 00:21:48.532 "bdev_retry_count": 3, 00:21:48.532 "transport_ack_timeout": 0, 00:21:48.532 "ctrlr_loss_timeout_sec": 0, 00:21:48.532 "reconnect_delay_sec": 0, 00:21:48.532 "fast_io_fail_timeout_sec": 0, 00:21:48.532 "generate_uuids": false, 00:21:48.532 "transport_tos": 0, 00:21:48.532 "io_path_stat": false, 00:21:48.532 "allow_accel_sequence": false 00:21:48.532 } 00:21:48.532 }, 00:21:48.532 { 00:21:48.532 "method": "bdev_nvme_set_hotplug", 00:21:48.532 "params": { 00:21:48.532 "period_us": 100000, 00:21:48.532 "enable": false 00:21:48.532 } 00:21:48.532 }, 00:21:48.532 { 00:21:48.532 "method": "bdev_malloc_create", 00:21:48.532 "params": { 00:21:48.532 "name": "malloc0", 00:21:48.532 "num_blocks": 8192, 00:21:48.532 "block_size": 4096, 
00:21:48.532 "physical_block_size": 4096, 00:21:48.532 "uuid": "e5f37140-3903-459c-b4cc-1f1c827407af", 00:21:48.532 "optimal_io_boundary": 0 00:21:48.532 } 00:21:48.532 }, 00:21:48.532 { 00:21:48.532 "method": "bdev_wait_for_examine" 00:21:48.532 } 00:21:48.532 ] 00:21:48.532 }, 00:21:48.532 { 00:21:48.532 "subsystem": "nbd", 00:21:48.532 "config": [] 00:21:48.532 }, 00:21:48.532 { 00:21:48.532 "subsystem": "scheduler", 00:21:48.532 "config": [ 00:21:48.532 { 00:21:48.532 "method": "framework_set_scheduler", 00:21:48.532 "params": { 00:21:48.532 "name": "static" 00:21:48.532 } 00:21:48.532 } 00:21:48.532 ] 00:21:48.532 }, 00:21:48.532 { 00:21:48.533 "subsystem": "nvmf", 00:21:48.533 "config": [ 00:21:48.533 { 00:21:48.533 "method": "nvmf_set_config", 00:21:48.533 "params": { 00:21:48.533 "discovery_filter": "match_any", 00:21:48.533 "admin_cmd_passthru": { 00:21:48.533 "identify_ctrlr": false 00:21:48.533 } 00:21:48.533 } 00:21:48.533 }, 00:21:48.533 { 00:21:48.533 "method": "nvmf_set_max_subsystems", 00:21:48.533 "params": { 00:21:48.533 "max_subsystems": 1024 00:21:48.533 } 00:21:48.533 }, 00:21:48.533 { 00:21:48.533 "method": "nvmf_set_crdt", 00:21:48.533 "params": { 00:21:48.533 "crdt1": 0, 00:21:48.533 "crdt2": 0, 00:21:48.533 "crdt3": 0 00:21:48.533 } 00:21:48.533 }, 00:21:48.533 { 00:21:48.533 "method": "nvmf_create_transport", 00:21:48.533 "params": { 00:21:48.533 "trtype": "TCP", 00:21:48.533 "max_queue_depth": 128, 00:21:48.533 "max_io_qpairs_per_ctrlr": 127, 00:21:48.533 "in_capsule_data_size": 4096, 00:21:48.533 "max_io_size": 131072, 00:21:48.533 "io_unit_size": 131072, 00:21:48.533 "max_aq_depth": 128, 00:21:48.533 "num_shared_buffers": 511, 00:21:48.533 "buf_cache_size": 4294967295, 00:21:48.533 "dif_insert_or_strip": false, 00:21:48.533 "zcopy": false, 00:21:48.533 "c2h_success": false, 00:21:48.533 "sock_priority": 0, 00:21:48.533 "abort_timeout_sec": 1 00:21:48.533 } 00:21:48.533 }, 00:21:48.533 { 00:21:48.533 "method": "nvmf_create_subsystem", 00:21:48.533 "params": { 00:21:48.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.533 "allow_any_host": false, 00:21:48.533 "serial_number": "SPDK00000000000001", 00:21:48.533 "model_number": "SPDK bdev Controller", 00:21:48.533 "max_namespaces": 10, 00:21:48.533 "min_cntlid": 1, 00:21:48.533 "max_cntlid": 65519, 00:21:48.533 "ana_reporting": false 00:21:48.533 } 00:21:48.533 }, 00:21:48.533 { 00:21:48.533 "method": "nvmf_subsystem_add_host", 00:21:48.533 "params": { 00:21:48.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.533 "host": "nqn.2016-06.io.spdk:host1", 00:21:48.533 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:21:48.533 } 00:21:48.533 }, 00:21:48.533 { 00:21:48.533 "method": "nvmf_subsystem_add_ns", 00:21:48.533 "params": { 00:21:48.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.533 "namespace": { 00:21:48.533 "nsid": 1, 00:21:48.533 "bdev_name": "malloc0", 00:21:48.533 "nguid": "E5F371403903459CB4CC1F1C827407AF", 00:21:48.533 "uuid": "e5f37140-3903-459c-b4cc-1f1c827407af" 00:21:48.533 } 00:21:48.533 } 00:21:48.533 }, 00:21:48.533 { 00:21:48.533 "method": "nvmf_subsystem_add_listener", 00:21:48.533 "params": { 00:21:48.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.533 "listen_address": { 00:21:48.533 "trtype": "TCP", 00:21:48.533 "adrfam": "IPv4", 00:21:48.533 "traddr": "10.0.0.2", 00:21:48.533 "trsvcid": "4420" 00:21:48.533 }, 00:21:48.533 "secure_channel": true 00:21:48.533 } 00:21:48.533 } 00:21:48.533 ] 00:21:48.533 } 00:21:48.533 ] 00:21:48.533 }' 00:21:48.533 
12:16:01 -- nvmf/common.sh@469 -- # nvmfpid=1528923 00:21:48.533 12:16:01 -- nvmf/common.sh@470 -- # waitforlisten 1528923 00:21:48.533 12:16:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:48.533 12:16:01 -- common/autotest_common.sh@819 -- # '[' -z 1528923 ']' 00:21:48.533 12:16:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.533 12:16:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:48.533 12:16:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.533 12:16:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:48.533 12:16:01 -- common/autotest_common.sh@10 -- # set +x 00:21:48.533 [2024-06-11 12:16:01.500855] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:48.533 [2024-06-11 12:16:01.500910] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.533 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.793 [2024-06-11 12:16:01.584308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.793 [2024-06-11 12:16:01.611168] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:48.793 [2024-06-11 12:16:01.611260] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.793 [2024-06-11 12:16:01.611267] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.793 [2024-06-11 12:16:01.611271] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:48.793 [2024-06-11 12:16:01.611290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.793 [2024-06-11 12:16:01.781009] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.793 [2024-06-11 12:16:01.813041] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:48.793 [2024-06-11 12:16:01.813218] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.363 12:16:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:49.363 12:16:02 -- common/autotest_common.sh@852 -- # return 0 00:21:49.363 12:16:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:49.363 12:16:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:49.363 12:16:02 -- common/autotest_common.sh@10 -- # set +x 00:21:49.363 12:16:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.363 12:16:02 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:49.363 12:16:02 -- target/tls.sh@216 -- # bdevperf_pid=1529138 00:21:49.363 12:16:02 -- target/tls.sh@217 -- # waitforlisten 1529138 /var/tmp/bdevperf.sock 00:21:49.363 12:16:02 -- common/autotest_common.sh@819 -- # '[' -z 1529138 ']' 00:21:49.363 12:16:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:49.363 12:16:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:49.363 12:16:02 -- target/tls.sh@213 -- # echo '{ 00:21:49.363 "subsystems": [ 00:21:49.363 { 00:21:49.363 "subsystem": "iobuf", 00:21:49.363 "config": [ 00:21:49.363 { 00:21:49.363 "method": "iobuf_set_options", 00:21:49.363 "params": { 00:21:49.363 "small_pool_count": 8192, 00:21:49.363 "large_pool_count": 1024, 00:21:49.363 "small_bufsize": 8192, 00:21:49.363 "large_bufsize": 135168 00:21:49.364 } 00:21:49.364 } 00:21:49.364 ] 00:21:49.364 }, 00:21:49.364 { 00:21:49.364 "subsystem": "sock", 00:21:49.364 "config": [ 00:21:49.364 { 00:21:49.364 "method": "sock_impl_set_options", 00:21:49.364 "params": { 00:21:49.364 "impl_name": "posix", 00:21:49.364 "recv_buf_size": 2097152, 00:21:49.364 "send_buf_size": 2097152, 00:21:49.364 "enable_recv_pipe": true, 00:21:49.364 "enable_quickack": false, 00:21:49.364 "enable_placement_id": 0, 00:21:49.364 "enable_zerocopy_send_server": true, 00:21:49.364 "enable_zerocopy_send_client": false, 00:21:49.364 "zerocopy_threshold": 0, 00:21:49.364 "tls_version": 0, 00:21:49.364 "enable_ktls": false 00:21:49.364 } 00:21:49.364 }, 00:21:49.364 { 00:21:49.364 "method": "sock_impl_set_options", 00:21:49.364 "params": { 00:21:49.364 "impl_name": "ssl", 00:21:49.364 "recv_buf_size": 4096, 00:21:49.364 "send_buf_size": 4096, 00:21:49.364 "enable_recv_pipe": true, 00:21:49.364 "enable_quickack": false, 00:21:49.364 "enable_placement_id": 0, 00:21:49.364 "enable_zerocopy_send_server": true, 00:21:49.364 "enable_zerocopy_send_client": false, 00:21:49.364 "zerocopy_threshold": 0, 00:21:49.364 "tls_version": 0, 00:21:49.364 "enable_ktls": false 00:21:49.364 } 00:21:49.364 } 00:21:49.364 ] 00:21:49.364 }, 00:21:49.364 { 00:21:49.364 "subsystem": "vmd", 00:21:49.364 "config": [] 00:21:49.364 }, 00:21:49.364 { 00:21:49.364 "subsystem": "accel", 00:21:49.364 "config": [ 00:21:49.364 { 00:21:49.364 "method": "accel_set_options", 00:21:49.364 "params": { 00:21:49.364 "small_cache_size": 128, 00:21:49.364 
"large_cache_size": 16, 00:21:49.364 "task_count": 2048, 00:21:49.364 "sequence_count": 2048, 00:21:49.364 "buf_count": 2048 00:21:49.364 } 00:21:49.364 } 00:21:49.364 ] 00:21:49.364 }, 00:21:49.364 { 00:21:49.364 "subsystem": "bdev", 00:21:49.364 "config": [ 00:21:49.364 { 00:21:49.364 "method": "bdev_set_options", 00:21:49.364 "params": { 00:21:49.364 "bdev_io_pool_size": 65535, 00:21:49.364 "bdev_io_cache_size": 256, 00:21:49.364 "bdev_auto_examine": true, 00:21:49.364 "iobuf_small_cache_size": 128, 00:21:49.364 "iobuf_large_cache_size": 16 00:21:49.364 } 00:21:49.364 }, 00:21:49.364 { 00:21:49.364 "method": "bdev_raid_set_options", 00:21:49.364 "params": { 00:21:49.364 "process_window_size_kb": 1024 00:21:49.364 } 00:21:49.364 }, 00:21:49.364 { 00:21:49.364 "method": "bdev_iscsi_set_options", 00:21:49.364 "params": { 00:21:49.364 "timeout_sec": 30 00:21:49.364 } 00:21:49.364 }, 00:21:49.364 { 00:21:49.364 "method": "bdev_nvme_set_options", 00:21:49.364 "params": { 00:21:49.364 "action_on_timeout": "none", 00:21:49.364 "timeout_us": 0, 00:21:49.364 "timeout_admin_us": 0, 00:21:49.364 "keep_alive_timeout_ms": 10000, 00:21:49.364 "transport_retry_count": 4, 00:21:49.364 "arbitration_burst": 0, 00:21:49.364 "low_priority_weight": 0, 00:21:49.364 "medium_priority_weight": 0, 00:21:49.364 "high_priority_weight": 0, 00:21:49.364 "nvme_adminq_poll_period_us": 10000, 00:21:49.364 "nvme_ioq_poll_period_us": 0, 00:21:49.364 "io_queue_requests": 512, 00:21:49.364 "delay_cmd_submit": true, 00:21:49.364 "bdev_retry_count": 3, 00:21:49.364 "transport_ack_timeout": 0, 00:21:49.364 "ctrlr_loss_timeout_sec": 0, 00:21:49.364 "reconnect_delay_sec": 0, 00:21:49.364 "fast_io_fail_timeout_sec": 0, 00:21:49.364 "generate_uuids": false, 00:21:49.364 "transport_tos": 0, 00:21:49.364 "io_path_stat": false, 00:21:49.364 "allow_accel_sequence": false 00:21:49.364 } 00:21:49.364 }, 00:21:49.364 { 00:21:49.364 "method": "bdev_nvme_attach_controller", 00:21:49.364 "params": { 00:21:49.364 "name": "TLSTEST", 00:21:49.364 "trtype": "TCP", 00:21:49.364 "adrfam": "IPv4", 00:21:49.364 "traddr": "10.0.0.2", 00:21:49.364 "trsvcid": "4420", 00:21:49.364 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.364 "prchk_reftag": false, 00:21:49.364 "prchk_guard": false, 00:21:49.364 "ctrlr_loss_timeout_sec": 0, 00:21:49.364 "reconnect_delay_sec": 0, 00:21:49.364 "fast_io_fail_timeout_sec": 0, 00:21:49.364 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:21:49.364 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:49.364 "hdgst": false, 00:21:49.364 "ddgst": false 00:21:49.364 } 00:21:49.364 }, 00:21:49.364 { 00:21:49.364 "method": "bdev_nvme_set_hotplug", 00:21:49.364 "params": { 00:21:49.364 "period_us": 100000, 00:21:49.364 "enable": false 00:21:49.364 } 00:21:49.364 }, 00:21:49.364 { 00:21:49.364 "method": "bdev_wait_for_examine" 00:21:49.364 } 00:21:49.364 ] 00:21:49.364 }, 00:21:49.364 { 00:21:49.364 "subsystem": "nbd", 00:21:49.364 "config": [] 00:21:49.364 } 00:21:49.364 ] 00:21:49.364 }' 00:21:49.364 12:16:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:49.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:49.364 12:16:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:49.364 12:16:02 -- common/autotest_common.sh@10 -- # set +x 00:21:49.364 [2024-06-11 12:16:02.310554] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:49.364 [2024-06-11 12:16:02.310595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1529138 ] 00:21:49.364 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.364 [2024-06-11 12:16:02.353839] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.364 [2024-06-11 12:16:02.380318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.623 [2024-06-11 12:16:02.490804] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:50.194 12:16:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:50.194 12:16:03 -- common/autotest_common.sh@852 -- # return 0 00:21:50.194 12:16:03 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:50.194 Running I/O for 10 seconds... 00:22:00.194 00:22:00.195 Latency(us) 00:22:00.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.195 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:00.195 Verification LBA range: start 0x0 length 0x2000 00:22:00.195 TLSTESTn1 : 10.02 6869.42 26.83 0.00 0.00 18605.87 4123.31 50899.63 00:22:00.195 =================================================================================================================== 00:22:00.195 Total : 6869.42 26.83 0.00 0.00 18605.87 4123.31 50899.63 00:22:00.195 0 00:22:00.455 12:16:13 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:00.455 12:16:13 -- target/tls.sh@223 -- # killprocess 1529138 00:22:00.455 12:16:13 -- common/autotest_common.sh@926 -- # '[' -z 1529138 ']' 00:22:00.455 12:16:13 -- common/autotest_common.sh@930 -- # kill -0 1529138 00:22:00.455 12:16:13 -- common/autotest_common.sh@931 -- # uname 00:22:00.455 12:16:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:00.455 12:16:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1529138 00:22:00.455 12:16:13 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:00.455 12:16:13 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:00.456 12:16:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1529138' 00:22:00.456 killing process with pid 1529138 00:22:00.456 12:16:13 -- common/autotest_common.sh@945 -- # kill 1529138 00:22:00.456 Received shutdown signal, test time was about 10.000000 seconds 00:22:00.456 00:22:00.456 Latency(us) 00:22:00.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.456 =================================================================================================================== 00:22:00.456 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:00.456 12:16:13 -- common/autotest_common.sh@950 -- # wait 1529138 00:22:00.456 12:16:13 -- target/tls.sh@224 -- # killprocess 1528923 00:22:00.456 12:16:13 -- common/autotest_common.sh@926 -- # '[' -z 1528923 ']' 00:22:00.456 12:16:13 -- common/autotest_common.sh@930 -- # kill -0 1528923 00:22:00.456 12:16:13 -- 
common/autotest_common.sh@931 -- # uname 00:22:00.456 12:16:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:00.456 12:16:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1528923 00:22:00.456 12:16:13 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:00.456 12:16:13 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:00.456 12:16:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1528923' 00:22:00.456 killing process with pid 1528923 00:22:00.456 12:16:13 -- common/autotest_common.sh@945 -- # kill 1528923 00:22:00.456 12:16:13 -- common/autotest_common.sh@950 -- # wait 1528923 00:22:00.717 12:16:13 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:22:00.717 12:16:13 -- target/tls.sh@227 -- # cleanup 00:22:00.717 12:16:13 -- target/tls.sh@15 -- # process_shm --id 0 00:22:00.717 12:16:13 -- common/autotest_common.sh@796 -- # type=--id 00:22:00.717 12:16:13 -- common/autotest_common.sh@797 -- # id=0 00:22:00.717 12:16:13 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:22:00.717 12:16:13 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:00.717 12:16:13 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:22:00.717 12:16:13 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:22:00.717 12:16:13 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:22:00.717 12:16:13 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:00.717 nvmf_trace.0 00:22:00.717 12:16:13 -- common/autotest_common.sh@811 -- # return 0 00:22:00.717 12:16:13 -- target/tls.sh@16 -- # killprocess 1529138 00:22:00.717 12:16:13 -- common/autotest_common.sh@926 -- # '[' -z 1529138 ']' 00:22:00.717 12:16:13 -- common/autotest_common.sh@930 -- # kill -0 1529138 00:22:00.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1529138) - No such process 00:22:00.717 12:16:13 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1529138 is not found' 00:22:00.717 Process with pid 1529138 is not found 00:22:00.717 12:16:13 -- target/tls.sh@17 -- # nvmftestfini 00:22:00.717 12:16:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:00.717 12:16:13 -- nvmf/common.sh@116 -- # sync 00:22:00.717 12:16:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:00.717 12:16:13 -- nvmf/common.sh@119 -- # set +e 00:22:00.717 12:16:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:00.717 12:16:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:00.717 rmmod nvme_tcp 00:22:00.717 rmmod nvme_fabrics 00:22:00.717 rmmod nvme_keyring 00:22:00.717 12:16:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:00.717 12:16:13 -- nvmf/common.sh@123 -- # set -e 00:22:00.717 12:16:13 -- nvmf/common.sh@124 -- # return 0 00:22:00.717 12:16:13 -- nvmf/common.sh@477 -- # '[' -n 1528923 ']' 00:22:00.717 12:16:13 -- nvmf/common.sh@478 -- # killprocess 1528923 00:22:00.717 12:16:13 -- common/autotest_common.sh@926 -- # '[' -z 1528923 ']' 00:22:00.717 12:16:13 -- common/autotest_common.sh@930 -- # kill -0 1528923 00:22:00.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1528923) - No such process 00:22:00.717 12:16:13 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1528923 is not found' 00:22:00.717 Process with pid 1528923 is not found 00:22:00.717 
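The cleanup above archives the trace shared-memory file that the earlier app_setup_trace notices pointed at (/dev/shm/nvmf_trace.0) into the job's output directory before the target is torn down. If that archive needs to be inspected offline, a plausible follow-up on a machine with the same SPDK build would be (the spdk_trace binary location and its -f option are assumptions based on the standard trace app, they are not shown in this log):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
  tar -xzf nvmf_trace.0_shm.tar.gz                                                          # recreates nvmf_trace.0
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -f nvmf_trace.0    # decode the captured tracepoints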
12:16:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:00.717 12:16:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:00.717 12:16:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:00.717 12:16:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:00.717 12:16:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:00.717 12:16:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.717 12:16:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:00.717 12:16:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.262 12:16:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:03.262 12:16:15 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:03.262 00:22:03.262 real 1m11.452s 00:22:03.262 user 1m45.603s 00:22:03.262 sys 0m24.547s 00:22:03.262 12:16:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:03.262 12:16:15 -- common/autotest_common.sh@10 -- # set +x 00:22:03.262 ************************************ 00:22:03.262 END TEST nvmf_tls 00:22:03.263 ************************************ 00:22:03.263 12:16:15 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:03.263 12:16:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:03.263 12:16:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:03.263 12:16:15 -- common/autotest_common.sh@10 -- # set +x 00:22:03.263 ************************************ 00:22:03.263 START TEST nvmf_fips 00:22:03.263 ************************************ 00:22:03.263 12:16:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:03.263 * Looking for test storage... 
00:22:03.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:03.263 12:16:15 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:03.263 12:16:15 -- nvmf/common.sh@7 -- # uname -s 00:22:03.263 12:16:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:03.263 12:16:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:03.263 12:16:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:03.263 12:16:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:03.263 12:16:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:03.263 12:16:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:03.263 12:16:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:03.263 12:16:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:03.263 12:16:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:03.263 12:16:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:03.263 12:16:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:03.263 12:16:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:03.263 12:16:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:03.263 12:16:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:03.263 12:16:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:03.263 12:16:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:03.263 12:16:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:03.263 12:16:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:03.263 12:16:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:03.263 12:16:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.263 12:16:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.263 12:16:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.263 12:16:15 -- paths/export.sh@5 -- # export PATH 00:22:03.263 12:16:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.263 12:16:15 -- nvmf/common.sh@46 -- # : 0 00:22:03.263 12:16:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:03.263 12:16:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:03.263 12:16:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:03.263 12:16:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:03.263 12:16:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:03.263 12:16:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:03.263 12:16:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:03.263 12:16:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:03.263 12:16:15 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:03.263 12:16:15 -- fips/fips.sh@89 -- # check_openssl_version 00:22:03.263 12:16:15 -- fips/fips.sh@83 -- # local target=3.0.0 00:22:03.263 12:16:15 -- fips/fips.sh@85 -- # openssl version 00:22:03.263 12:16:15 -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:03.263 12:16:15 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:03.263 12:16:15 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:03.263 12:16:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:03.263 12:16:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:03.263 12:16:15 -- scripts/common.sh@335 -- # IFS=.-: 00:22:03.263 12:16:15 -- scripts/common.sh@335 -- # read -ra ver1 00:22:03.263 12:16:15 -- scripts/common.sh@336 -- # IFS=.-: 00:22:03.263 12:16:15 -- scripts/common.sh@336 -- # read -ra ver2 00:22:03.263 12:16:15 -- scripts/common.sh@337 -- # local 'op=>=' 00:22:03.263 12:16:15 -- scripts/common.sh@339 -- # ver1_l=3 00:22:03.263 12:16:15 -- scripts/common.sh@340 -- # ver2_l=3 00:22:03.263 12:16:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:03.263 12:16:15 -- scripts/common.sh@343 -- # case "$op" in 00:22:03.263 12:16:15 -- scripts/common.sh@347 -- # : 1 00:22:03.263 12:16:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:03.263 12:16:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:03.263 12:16:15 -- scripts/common.sh@364 -- # decimal 3 00:22:03.263 12:16:15 -- scripts/common.sh@352 -- # local d=3 00:22:03.263 12:16:15 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:03.263 12:16:15 -- scripts/common.sh@354 -- # echo 3 00:22:03.263 12:16:15 -- scripts/common.sh@364 -- # ver1[v]=3 00:22:03.263 12:16:15 -- scripts/common.sh@365 -- # decimal 3 00:22:03.263 12:16:15 -- scripts/common.sh@352 -- # local d=3 00:22:03.263 12:16:15 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:03.263 12:16:15 -- scripts/common.sh@354 -- # echo 3 00:22:03.263 12:16:15 -- scripts/common.sh@365 -- # ver2[v]=3 00:22:03.263 12:16:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:03.263 12:16:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:03.263 12:16:15 -- scripts/common.sh@363 -- # (( v++ )) 00:22:03.263 12:16:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:03.263 12:16:15 -- scripts/common.sh@364 -- # decimal 0 00:22:03.263 12:16:15 -- scripts/common.sh@352 -- # local d=0 00:22:03.263 12:16:15 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:03.263 12:16:15 -- scripts/common.sh@354 -- # echo 0 00:22:03.263 12:16:15 -- scripts/common.sh@364 -- # ver1[v]=0 00:22:03.263 12:16:15 -- scripts/common.sh@365 -- # decimal 0 00:22:03.263 12:16:15 -- scripts/common.sh@352 -- # local d=0 00:22:03.263 12:16:15 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:03.263 12:16:15 -- scripts/common.sh@354 -- # echo 0 00:22:03.263 12:16:15 -- scripts/common.sh@365 -- # ver2[v]=0 00:22:03.263 12:16:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:03.263 12:16:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:03.263 12:16:15 -- scripts/common.sh@363 -- # (( v++ )) 00:22:03.263 12:16:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:03.263 12:16:15 -- scripts/common.sh@364 -- # decimal 9 00:22:03.263 12:16:15 -- scripts/common.sh@352 -- # local d=9 00:22:03.263 12:16:15 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:03.263 12:16:15 -- scripts/common.sh@354 -- # echo 9 00:22:03.263 12:16:15 -- scripts/common.sh@364 -- # ver1[v]=9 00:22:03.263 12:16:15 -- scripts/common.sh@365 -- # decimal 0 00:22:03.263 12:16:15 -- scripts/common.sh@352 -- # local d=0 00:22:03.263 12:16:16 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:03.263 12:16:16 -- scripts/common.sh@354 -- # echo 0 00:22:03.263 12:16:16 -- scripts/common.sh@365 -- # ver2[v]=0 00:22:03.263 12:16:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:03.263 12:16:16 -- scripts/common.sh@366 -- # return 0 00:22:03.263 12:16:16 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:03.263 12:16:16 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:03.263 12:16:16 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:03.263 12:16:16 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:03.263 12:16:16 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:03.263 12:16:16 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:03.263 12:16:16 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:03.263 12:16:16 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:22:03.263 12:16:16 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:22:03.263 12:16:16 -- fips/fips.sh@114 -- # build_openssl_config 00:22:03.263 12:16:16 -- fips/fips.sh@37 -- # cat 00:22:03.263 12:16:16 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:22:03.263 12:16:16 -- fips/fips.sh@58 -- # cat - 00:22:03.263 12:16:16 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:03.263 12:16:16 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:22:03.263 12:16:16 -- fips/fips.sh@117 -- # mapfile -t providers 00:22:03.263 12:16:16 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:22:03.263 12:16:16 -- fips/fips.sh@117 -- # openssl list -providers 00:22:03.263 12:16:16 -- fips/fips.sh@117 -- # grep name 00:22:03.263 12:16:16 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:22:03.264 12:16:16 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:22:03.264 12:16:16 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:03.264 12:16:16 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:22:03.264 12:16:16 -- common/autotest_common.sh@640 -- # local es=0 00:22:03.264 12:16:16 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:03.264 12:16:16 -- fips/fips.sh@128 -- # : 00:22:03.264 12:16:16 -- common/autotest_common.sh@628 -- # local arg=openssl 00:22:03.264 12:16:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:03.264 12:16:16 -- common/autotest_common.sh@632 -- # type -t openssl 00:22:03.264 12:16:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:03.264 12:16:16 -- common/autotest_common.sh@634 -- # type -P openssl 00:22:03.264 12:16:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:03.264 12:16:16 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:22:03.264 12:16:16 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:22:03.264 12:16:16 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:22:03.264 Error setting digest 00:22:03.264 0042FB83AB7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:03.264 0042FB83AB7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:03.264 12:16:16 -- common/autotest_common.sh@643 -- # es=1 00:22:03.264 12:16:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:03.264 12:16:16 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:03.264 12:16:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
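The sequence above is fips.sh building a FIPS-only OpenSSL configuration (OPENSSL_CONF=spdk_fips.conf), confirming that both the base and fips providers are loaded, and then proving the point negatively: openssl md5 fails with "unsupported" because MD5 is not a FIPS-approved digest, so the script records es=1 and treats the failure as the expected outcome. The same spot-check can be reproduced by hand once a FIPS provider configuration is active (spdk_fips.conf here is the file generated by build_openssl_config in the test, not a stock system file):

  export OPENSSL_CONF=spdk_fips.conf
  openssl list -providers | grep name            # expect a base and a fips provider, as in the output above
  if echo test | openssl md5 >/dev/null 2>&1; then
      echo 'MD5 accepted - the FIPS-only configuration is not in effect' >&2
  else
      echo 'MD5 rejected, as required'
  fi
  echo test | openssl sha256                     # an approved digest still works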
00:22:03.264 12:16:16 -- fips/fips.sh@131 -- # nvmftestinit 00:22:03.264 12:16:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:03.264 12:16:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:03.264 12:16:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:03.264 12:16:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:03.264 12:16:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:03.264 12:16:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.264 12:16:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:03.264 12:16:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.264 12:16:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:03.264 12:16:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:03.264 12:16:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:03.264 12:16:16 -- common/autotest_common.sh@10 -- # set +x 00:22:11.406 12:16:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:11.406 12:16:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:11.406 12:16:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:11.406 12:16:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:11.406 12:16:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:11.406 12:16:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:11.406 12:16:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:11.406 12:16:23 -- nvmf/common.sh@294 -- # net_devs=() 00:22:11.406 12:16:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:11.406 12:16:23 -- nvmf/common.sh@295 -- # e810=() 00:22:11.406 12:16:23 -- nvmf/common.sh@295 -- # local -ga e810 00:22:11.406 12:16:23 -- nvmf/common.sh@296 -- # x722=() 00:22:11.406 12:16:23 -- nvmf/common.sh@296 -- # local -ga x722 00:22:11.406 12:16:23 -- nvmf/common.sh@297 -- # mlx=() 00:22:11.406 12:16:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:11.406 12:16:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:11.406 12:16:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:11.406 12:16:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:11.406 12:16:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:11.406 12:16:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:11.406 12:16:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:11.406 12:16:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:11.406 12:16:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:11.406 12:16:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:11.406 12:16:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:11.406 12:16:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:11.406 12:16:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:11.406 12:16:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:11.406 12:16:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:11.406 12:16:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:11.406 12:16:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:11.406 12:16:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:11.406 12:16:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:11.406 12:16:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:11.406 Found 0000:31:00.0 
(0x8086 - 0x159b) 00:22:11.406 12:16:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:11.406 12:16:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:11.406 12:16:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.406 12:16:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.406 12:16:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:11.406 12:16:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:11.406 12:16:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:11.406 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:11.406 12:16:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:11.406 12:16:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:11.406 12:16:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.406 12:16:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.406 12:16:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:11.406 12:16:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:11.406 12:16:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:11.406 12:16:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:11.406 12:16:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:11.406 12:16:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.406 12:16:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:11.406 12:16:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.406 12:16:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:11.406 Found net devices under 0000:31:00.0: cvl_0_0 00:22:11.406 12:16:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.406 12:16:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:11.406 12:16:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.406 12:16:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:11.406 12:16:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.406 12:16:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:11.406 Found net devices under 0000:31:00.1: cvl_0_1 00:22:11.406 12:16:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.406 12:16:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:11.406 12:16:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:11.406 12:16:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:11.406 12:16:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:11.406 12:16:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:11.406 12:16:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.406 12:16:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.406 12:16:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:11.406 12:16:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:11.406 12:16:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:11.406 12:16:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:11.406 12:16:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:11.406 12:16:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:11.406 12:16:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.406 12:16:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:11.406 12:16:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:11.406 12:16:23 -- nvmf/common.sh@247 -- # ip netns 
add cvl_0_0_ns_spdk 00:22:11.406 12:16:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:11.406 12:16:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:11.406 12:16:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:11.406 12:16:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:11.406 12:16:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:11.406 12:16:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:11.406 12:16:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:11.406 12:16:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:11.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:11.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:22:11.406 00:22:11.406 --- 10.0.0.2 ping statistics --- 00:22:11.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.407 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:22:11.407 12:16:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:11.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:11.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:22:11.407 00:22:11.407 --- 10.0.0.1 ping statistics --- 00:22:11.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.407 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:22:11.407 12:16:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.407 12:16:23 -- nvmf/common.sh@410 -- # return 0 00:22:11.407 12:16:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:11.407 12:16:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.407 12:16:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:11.407 12:16:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:11.407 12:16:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.407 12:16:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:11.407 12:16:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:11.407 12:16:23 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:22:11.407 12:16:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:11.407 12:16:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:11.407 12:16:23 -- common/autotest_common.sh@10 -- # set +x 00:22:11.407 12:16:23 -- nvmf/common.sh@469 -- # nvmfpid=1535603 00:22:11.407 12:16:23 -- nvmf/common.sh@470 -- # waitforlisten 1535603 00:22:11.407 12:16:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:11.407 12:16:23 -- common/autotest_common.sh@819 -- # '[' -z 1535603 ']' 00:22:11.407 12:16:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.407 12:16:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:11.407 12:16:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.407 12:16:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:11.407 12:16:23 -- common/autotest_common.sh@10 -- # set +x 00:22:11.407 [2024-06-11 12:16:23.612048] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:22:11.407 [2024-06-11 12:16:23.612118] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.407 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.407 [2024-06-11 12:16:23.699854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.407 [2024-06-11 12:16:23.743206] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:11.407 [2024-06-11 12:16:23.743350] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.407 [2024-06-11 12:16:23.743366] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.407 [2024-06-11 12:16:23.743374] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:11.407 [2024-06-11 12:16:23.743396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.407 12:16:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:11.407 12:16:24 -- common/autotest_common.sh@852 -- # return 0 00:22:11.407 12:16:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:11.407 12:16:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:11.407 12:16:24 -- common/autotest_common.sh@10 -- # set +x 00:22:11.407 12:16:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.407 12:16:24 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:22:11.407 12:16:24 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:11.407 12:16:24 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:11.407 12:16:24 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:11.407 12:16:24 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:11.407 12:16:24 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:11.407 12:16:24 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:11.407 12:16:24 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:11.722 [2024-06-11 12:16:24.535869] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.722 [2024-06-11 12:16:24.551871] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:11.722 [2024-06-11 12:16:24.552064] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.722 malloc0 00:22:11.722 12:16:24 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:11.722 12:16:24 -- fips/fips.sh@148 -- # bdevperf_pid=1535958 00:22:11.722 12:16:24 -- fips/fips.sh@149 -- # waitforlisten 1535958 /var/tmp/bdevperf.sock 00:22:11.722 12:16:24 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:11.722 12:16:24 -- common/autotest_common.sh@819 -- # '[' -z 1535958 ']' 00:22:11.722 12:16:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:11.722 12:16:24 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:22:11.722 12:16:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:11.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:11.722 12:16:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:11.722 12:16:24 -- common/autotest_common.sh@10 -- # set +x 00:22:11.722 [2024-06-11 12:16:24.675312] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:11.722 [2024-06-11 12:16:24.675366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1535958 ] 00:22:11.722 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.987 [2024-06-11 12:16:24.728591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.987 [2024-06-11 12:16:24.754891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.557 12:16:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:12.557 12:16:25 -- common/autotest_common.sh@852 -- # return 0 00:22:12.557 12:16:25 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:12.557 [2024-06-11 12:16:25.550379] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:12.816 TLSTESTn1 00:22:12.816 12:16:25 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:12.816 Running I/O for 10 seconds... 
00:22:22.807 00:22:22.807 Latency(us) 00:22:22.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.807 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:22.807 Verification LBA range: start 0x0 length 0x2000 00:22:22.807 TLSTESTn1 : 10.01 6801.82 26.57 0.00 0.00 18799.94 3577.17 49588.91 00:22:22.807 =================================================================================================================== 00:22:22.807 Total : 6801.82 26.57 0.00 0.00 18799.94 3577.17 49588.91 00:22:22.807 0 00:22:22.807 12:16:35 -- fips/fips.sh@1 -- # cleanup 00:22:22.807 12:16:35 -- fips/fips.sh@15 -- # process_shm --id 0 00:22:22.807 12:16:35 -- common/autotest_common.sh@796 -- # type=--id 00:22:22.807 12:16:35 -- common/autotest_common.sh@797 -- # id=0 00:22:22.807 12:16:35 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:22:22.807 12:16:35 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:22.807 12:16:35 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:22:22.807 12:16:35 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:22:22.807 12:16:35 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:22:22.808 12:16:35 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:22.808 nvmf_trace.0 00:22:23.069 12:16:35 -- common/autotest_common.sh@811 -- # return 0 00:22:23.069 12:16:35 -- fips/fips.sh@16 -- # killprocess 1535958 00:22:23.069 12:16:35 -- common/autotest_common.sh@926 -- # '[' -z 1535958 ']' 00:22:23.069 12:16:35 -- common/autotest_common.sh@930 -- # kill -0 1535958 00:22:23.069 12:16:35 -- common/autotest_common.sh@931 -- # uname 00:22:23.069 12:16:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:23.069 12:16:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1535958 00:22:23.069 12:16:35 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:23.069 12:16:35 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:23.069 12:16:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1535958' 00:22:23.069 killing process with pid 1535958 00:22:23.069 12:16:35 -- common/autotest_common.sh@945 -- # kill 1535958 00:22:23.069 Received shutdown signal, test time was about 10.000000 seconds 00:22:23.069 00:22:23.069 Latency(us) 00:22:23.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.069 =================================================================================================================== 00:22:23.069 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:23.069 12:16:35 -- common/autotest_common.sh@950 -- # wait 1535958 00:22:23.069 12:16:36 -- fips/fips.sh@17 -- # nvmftestfini 00:22:23.069 12:16:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:23.069 12:16:36 -- nvmf/common.sh@116 -- # sync 00:22:23.069 12:16:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:23.069 12:16:36 -- nvmf/common.sh@119 -- # set +e 00:22:23.069 12:16:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:23.069 12:16:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:23.069 rmmod nvme_tcp 00:22:23.069 rmmod nvme_fabrics 00:22:23.069 rmmod nvme_keyring 00:22:23.069 12:16:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:23.069 12:16:36 -- nvmf/common.sh@123 -- # set -e 00:22:23.069 12:16:36 -- nvmf/common.sh@124 -- # return 0 
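The TLSTESTn1 result line above is internally consistent with the workload bdevperf was given (-q 128, -o 4096, verify, 10 s): with 128 outstanding I/Os and an average completion latency of about 18 800 us, Little's law gives roughly 128 / 0.0188 s, about 6 800 IOPS, matching the 6 801.82 reported, and 6 801.82 IOPS x 4 096 B is about 26.6 MiB/s, matching the throughput column. The earlier tls.sh run (6 869.42 IOPS at ~18 606 us average) obeys the same relation.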
00:22:23.069 12:16:36 -- nvmf/common.sh@477 -- # '[' -n 1535603 ']' 00:22:23.069 12:16:36 -- nvmf/common.sh@478 -- # killprocess 1535603 00:22:23.069 12:16:36 -- common/autotest_common.sh@926 -- # '[' -z 1535603 ']' 00:22:23.069 12:16:36 -- common/autotest_common.sh@930 -- # kill -0 1535603 00:22:23.069 12:16:36 -- common/autotest_common.sh@931 -- # uname 00:22:23.069 12:16:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:23.069 12:16:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1535603 00:22:23.329 12:16:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:23.329 12:16:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:23.329 12:16:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1535603' 00:22:23.329 killing process with pid 1535603 00:22:23.329 12:16:36 -- common/autotest_common.sh@945 -- # kill 1535603 00:22:23.329 12:16:36 -- common/autotest_common.sh@950 -- # wait 1535603 00:22:23.329 12:16:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:23.329 12:16:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:23.329 12:16:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:23.329 12:16:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:23.329 12:16:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:23.329 12:16:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.329 12:16:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.329 12:16:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.874 12:16:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:25.874 12:16:38 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:25.874 00:22:25.874 real 0m22.506s 00:22:25.874 user 0m23.357s 00:22:25.874 sys 0m9.690s 00:22:25.874 12:16:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:25.874 12:16:38 -- common/autotest_common.sh@10 -- # set +x 00:22:25.874 ************************************ 00:22:25.874 END TEST nvmf_fips 00:22:25.874 ************************************ 00:22:25.874 12:16:38 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:22:25.874 12:16:38 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:25.874 12:16:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:25.874 12:16:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:25.874 12:16:38 -- common/autotest_common.sh@10 -- # set +x 00:22:25.874 ************************************ 00:22:25.874 START TEST nvmf_fuzz 00:22:25.874 ************************************ 00:22:25.874 12:16:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:25.874 * Looking for test storage... 
00:22:25.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:25.874 12:16:38 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:25.874 12:16:38 -- nvmf/common.sh@7 -- # uname -s 00:22:25.874 12:16:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:25.874 12:16:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:25.874 12:16:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:25.874 12:16:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:25.874 12:16:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:25.874 12:16:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:25.874 12:16:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:25.874 12:16:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:25.874 12:16:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:25.874 12:16:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:25.874 12:16:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:25.874 12:16:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:25.874 12:16:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:25.874 12:16:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:25.874 12:16:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:25.874 12:16:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:25.874 12:16:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:25.874 12:16:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:25.874 12:16:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:25.874 12:16:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.874 12:16:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.874 12:16:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.874 12:16:38 -- paths/export.sh@5 -- # export PATH 00:22:25.874 12:16:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.874 12:16:38 -- nvmf/common.sh@46 -- # : 0 00:22:25.874 12:16:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:25.874 12:16:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:25.874 12:16:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:25.874 12:16:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:25.874 12:16:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:25.874 12:16:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:25.874 12:16:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:25.874 12:16:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:25.874 12:16:38 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:22:25.874 12:16:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:25.874 12:16:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:25.874 12:16:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:25.874 12:16:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:25.874 12:16:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:25.874 12:16:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.874 12:16:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:25.874 12:16:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.874 12:16:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:25.874 12:16:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:25.874 12:16:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:25.874 12:16:38 -- common/autotest_common.sh@10 -- # set +x 00:22:32.461 12:16:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:32.461 12:16:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:32.461 12:16:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:32.461 12:16:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:32.461 12:16:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:32.461 12:16:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:32.461 12:16:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:32.461 12:16:45 -- nvmf/common.sh@294 -- # net_devs=() 00:22:32.461 12:16:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:32.461 12:16:45 -- nvmf/common.sh@295 -- # e810=() 00:22:32.461 12:16:45 -- nvmf/common.sh@295 -- # local -ga e810 00:22:32.461 12:16:45 -- nvmf/common.sh@296 -- # x722=() 
00:22:32.461 12:16:45 -- nvmf/common.sh@296 -- # local -ga x722 00:22:32.461 12:16:45 -- nvmf/common.sh@297 -- # mlx=() 00:22:32.461 12:16:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:32.461 12:16:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.461 12:16:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.461 12:16:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.461 12:16:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.461 12:16:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.461 12:16:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.461 12:16:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.461 12:16:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.461 12:16:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.461 12:16:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.461 12:16:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.461 12:16:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:32.461 12:16:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:32.461 12:16:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:32.461 12:16:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:32.461 12:16:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:32.461 12:16:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:32.461 12:16:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:32.461 12:16:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:32.461 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:32.461 12:16:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:32.461 12:16:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:32.461 12:16:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.461 12:16:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.461 12:16:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:32.461 12:16:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:32.461 12:16:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:32.461 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:32.461 12:16:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:32.461 12:16:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:32.461 12:16:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.461 12:16:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.461 12:16:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:32.461 12:16:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:32.461 12:16:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:32.461 12:16:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:32.461 12:16:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:32.461 12:16:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.461 12:16:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:32.461 12:16:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.461 12:16:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:32.461 Found net devices under 0000:31:00.0: cvl_0_0 00:22:32.461 12:16:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
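The interface discovery above is driven purely by PCI IDs: the supported Intel E810/X722 and Mellanox device IDs are collected, the RDMA-only branches are skipped for TCP, and each matching function is mapped to its kernel netdev through sysfs. A rough interactive equivalent, assuming lspci is available (it is not used by the script itself), would be:

  # list E810 functions by vendor:device ID, as matched against 0x8086/0x159b above
  lspci -D -d 8086:159b
  # resolve a PCI function to its net device the same way common.sh does
  ls /sys/bus/pci/devices/0000:31:00.0/net/    # -> cvl_0_0 on this machine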
00:22:32.461 12:16:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:32.461 12:16:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.461 12:16:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:32.461 12:16:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.461 12:16:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:32.461 Found net devices under 0000:31:00.1: cvl_0_1 00:22:32.461 12:16:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.461 12:16:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:32.461 12:16:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:32.461 12:16:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:32.461 12:16:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:32.461 12:16:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:32.461 12:16:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.461 12:16:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.461 12:16:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:32.461 12:16:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:32.461 12:16:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:32.461 12:16:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:32.461 12:16:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:32.461 12:16:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:32.461 12:16:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.461 12:16:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:32.461 12:16:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:32.461 12:16:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:32.461 12:16:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:32.461 12:16:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:32.461 12:16:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:32.724 12:16:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:32.724 12:16:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:32.724 12:16:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:32.724 12:16:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:32.724 12:16:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:32.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:22:32.724 00:22:32.724 --- 10.0.0.2 ping statistics --- 00:22:32.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.724 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:22:32.724 12:16:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:32.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:32.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:22:32.724 00:22:32.724 --- 10.0.0.1 ping statistics --- 00:22:32.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.724 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:22:32.724 12:16:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.724 12:16:45 -- nvmf/common.sh@410 -- # return 0 00:22:32.724 12:16:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:32.724 12:16:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.724 12:16:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:32.724 12:16:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:32.724 12:16:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.724 12:16:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:32.724 12:16:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:32.724 12:16:45 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1542397 00:22:32.724 12:16:45 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:32.724 12:16:45 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:32.724 12:16:45 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1542397 00:22:32.724 12:16:45 -- common/autotest_common.sh@819 -- # '[' -z 1542397 ']' 00:22:32.724 12:16:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.724 12:16:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:32.724 12:16:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
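The two ping blocks close out nvmf_tcp_init: cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace as the target port (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1/24), and reachability is checked in both directions before nvmf_tgt is started inside the namespace on a single core. Condensed into a sketch (interface and namespace names are the ones from this run; the target binary path is shortened):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # make sure the host firewall does not block NVMe/TCP (port 4420) traffic on the link
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # the fuzz run then launches a single-core target (-m 0x1) inside the namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &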
00:22:32.724 12:16:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:32.724 12:16:45 -- common/autotest_common.sh@10 -- # set +x 00:22:33.667 12:16:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:33.667 12:16:46 -- common/autotest_common.sh@852 -- # return 0 00:22:33.667 12:16:46 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:33.667 12:16:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.667 12:16:46 -- common/autotest_common.sh@10 -- # set +x 00:22:33.667 12:16:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.667 12:16:46 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:22:33.667 12:16:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.667 12:16:46 -- common/autotest_common.sh@10 -- # set +x 00:22:33.667 Malloc0 00:22:33.667 12:16:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.667 12:16:46 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:33.667 12:16:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.667 12:16:46 -- common/autotest_common.sh@10 -- # set +x 00:22:33.667 12:16:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.667 12:16:46 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:33.667 12:16:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.667 12:16:46 -- common/autotest_common.sh@10 -- # set +x 00:22:33.667 12:16:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.667 12:16:46 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:33.667 12:16:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.667 12:16:46 -- common/autotest_common.sh@10 -- # set +x 00:22:33.667 12:16:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.667 12:16:46 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:22:33.667 12:16:46 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:23:05.778 Fuzzing completed. Shutting down the fuzz application 00:23:05.778 00:23:05.778 Dumping successful admin opcodes: 00:23:05.778 8, 9, 10, 24, 00:23:05.778 Dumping successful io opcodes: 00:23:05.778 0, 9, 00:23:05.778 NS: 0x200003aeff00 I/O qp, Total commands completed: 931815, total successful commands: 5433, random_seed: 60622336 00:23:05.778 NS: 0x200003aeff00 admin qp, Total commands completed: 116915, total successful commands: 957, random_seed: 2714954048 00:23:05.779 12:17:16 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:23:05.779 Fuzzing completed. 
Shutting down the fuzz application 00:23:05.779 00:23:05.779 Dumping successful admin opcodes: 00:23:05.779 24, 00:23:05.779 Dumping successful io opcodes: 00:23:05.779 00:23:05.779 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1987532157 00:23:05.779 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1987605249 00:23:05.779 12:17:18 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:05.779 12:17:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:05.779 12:17:18 -- common/autotest_common.sh@10 -- # set +x 00:23:05.779 12:17:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:05.779 12:17:18 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:05.779 12:17:18 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:23:05.779 12:17:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:05.779 12:17:18 -- nvmf/common.sh@116 -- # sync 00:23:05.779 12:17:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:05.779 12:17:18 -- nvmf/common.sh@119 -- # set +e 00:23:05.779 12:17:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:05.779 12:17:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:05.779 rmmod nvme_tcp 00:23:05.779 rmmod nvme_fabrics 00:23:05.779 rmmod nvme_keyring 00:23:05.779 12:17:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:05.779 12:17:18 -- nvmf/common.sh@123 -- # set -e 00:23:05.779 12:17:18 -- nvmf/common.sh@124 -- # return 0 00:23:05.779 12:17:18 -- nvmf/common.sh@477 -- # '[' -n 1542397 ']' 00:23:05.779 12:17:18 -- nvmf/common.sh@478 -- # killprocess 1542397 00:23:05.779 12:17:18 -- common/autotest_common.sh@926 -- # '[' -z 1542397 ']' 00:23:05.779 12:17:18 -- common/autotest_common.sh@930 -- # kill -0 1542397 00:23:05.779 12:17:18 -- common/autotest_common.sh@931 -- # uname 00:23:05.779 12:17:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:05.779 12:17:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1542397 00:23:05.779 12:17:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:05.779 12:17:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:05.779 12:17:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1542397' 00:23:05.779 killing process with pid 1542397 00:23:05.779 12:17:18 -- common/autotest_common.sh@945 -- # kill 1542397 00:23:05.779 12:17:18 -- common/autotest_common.sh@950 -- # wait 1542397 00:23:05.779 12:17:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:05.779 12:17:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:05.779 12:17:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:05.779 12:17:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:05.779 12:17:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:05.779 12:17:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.779 12:17:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.779 12:17:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.688 12:17:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:07.688 12:17:20 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:23:07.688 00:23:07.688 real 0m42.198s 00:23:07.688 user 0m56.205s 00:23:07.688 sys 
0m15.152s 00:23:07.688 12:17:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:07.688 12:17:20 -- common/autotest_common.sh@10 -- # set +x 00:23:07.688 ************************************ 00:23:07.688 END TEST nvmf_fuzz 00:23:07.688 ************************************ 00:23:07.688 12:17:20 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:07.688 12:17:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:07.688 12:17:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:07.688 12:17:20 -- common/autotest_common.sh@10 -- # set +x 00:23:07.688 ************************************ 00:23:07.688 START TEST nvmf_multiconnection 00:23:07.688 ************************************ 00:23:07.688 12:17:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:07.688 * Looking for test storage... 00:23:07.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:07.688 12:17:20 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:07.949 12:17:20 -- nvmf/common.sh@7 -- # uname -s 00:23:07.949 12:17:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.949 12:17:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.949 12:17:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.949 12:17:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.949 12:17:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.949 12:17:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.949 12:17:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.949 12:17:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.949 12:17:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.949 12:17:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:07.949 12:17:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:07.949 12:17:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:07.949 12:17:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.949 12:17:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.949 12:17:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:07.949 12:17:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:07.949 12:17:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.949 12:17:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.949 12:17:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.949 12:17:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.949 12:17:20 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.949 12:17:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.949 12:17:20 -- paths/export.sh@5 -- # export PATH 00:23:07.949 12:17:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.949 12:17:20 -- nvmf/common.sh@46 -- # : 0 00:23:07.949 12:17:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:07.949 12:17:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:07.949 12:17:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:07.949 12:17:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.949 12:17:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.949 12:17:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:07.949 12:17:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:07.949 12:17:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:07.949 12:17:20 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:07.949 12:17:20 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:07.949 12:17:20 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:23:07.949 12:17:20 -- target/multiconnection.sh@16 -- # nvmftestinit 00:23:07.949 12:17:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:07.949 12:17:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.949 12:17:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:07.949 12:17:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:07.949 12:17:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:07.949 12:17:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.949 12:17:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.949 12:17:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.949 12:17:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:07.949 12:17:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:07.949 12:17:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:07.949 12:17:20 -- common/autotest_common.sh@10 -- 
# set +x 00:23:16.112 12:17:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:16.112 12:17:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:16.112 12:17:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:16.112 12:17:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:16.112 12:17:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:16.112 12:17:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:16.112 12:17:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:16.112 12:17:27 -- nvmf/common.sh@294 -- # net_devs=() 00:23:16.112 12:17:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:16.112 12:17:27 -- nvmf/common.sh@295 -- # e810=() 00:23:16.112 12:17:27 -- nvmf/common.sh@295 -- # local -ga e810 00:23:16.112 12:17:27 -- nvmf/common.sh@296 -- # x722=() 00:23:16.112 12:17:27 -- nvmf/common.sh@296 -- # local -ga x722 00:23:16.112 12:17:27 -- nvmf/common.sh@297 -- # mlx=() 00:23:16.112 12:17:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:16.112 12:17:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:16.112 12:17:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:16.112 12:17:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:16.112 12:17:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:16.112 12:17:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:16.112 12:17:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:16.112 12:17:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:16.112 12:17:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:16.112 12:17:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:16.112 12:17:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:16.112 12:17:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:16.112 12:17:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:16.112 12:17:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:16.112 12:17:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:16.112 12:17:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:16.112 12:17:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:16.112 12:17:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:16.112 12:17:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:16.112 12:17:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:16.112 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:16.112 12:17:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:16.112 12:17:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:16.112 12:17:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.112 12:17:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.112 12:17:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:16.112 12:17:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:16.112 12:17:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:16.112 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:16.112 12:17:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:16.112 12:17:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:16.112 12:17:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.112 12:17:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.112 12:17:27 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:16.112 12:17:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:16.112 12:17:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:16.112 12:17:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:16.112 12:17:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:16.112 12:17:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.112 12:17:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:16.112 12:17:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.112 12:17:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:16.112 Found net devices under 0000:31:00.0: cvl_0_0 00:23:16.112 12:17:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.112 12:17:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:16.112 12:17:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.112 12:17:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:16.112 12:17:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.112 12:17:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:16.112 Found net devices under 0000:31:00.1: cvl_0_1 00:23:16.112 12:17:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.112 12:17:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:16.112 12:17:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:16.112 12:17:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:16.112 12:17:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:16.112 12:17:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:16.112 12:17:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:16.112 12:17:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:16.112 12:17:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:16.112 12:17:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:16.112 12:17:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:16.112 12:17:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:16.112 12:17:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:16.112 12:17:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:16.112 12:17:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:16.112 12:17:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:16.112 12:17:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:16.112 12:17:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:16.112 12:17:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:16.112 12:17:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:16.112 12:17:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:16.112 12:17:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:16.112 12:17:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:16.112 12:17:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:16.112 12:17:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:16.112 12:17:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:16.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:16.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.503 ms 00:23:16.112 00:23:16.112 --- 10.0.0.2 ping statistics --- 00:23:16.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.112 rtt min/avg/max/mdev = 0.503/0.503/0.503/0.000 ms 00:23:16.112 12:17:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:16.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:16.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:23:16.112 00:23:16.112 --- 10.0.0.1 ping statistics --- 00:23:16.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.112 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:23:16.112 12:17:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:16.112 12:17:28 -- nvmf/common.sh@410 -- # return 0 00:23:16.112 12:17:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:16.112 12:17:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:16.112 12:17:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:16.112 12:17:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:16.112 12:17:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:16.112 12:17:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:16.112 12:17:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:16.112 12:17:28 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:23:16.112 12:17:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:16.112 12:17:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:16.112 12:17:28 -- common/autotest_common.sh@10 -- # set +x 00:23:16.112 12:17:28 -- nvmf/common.sh@469 -- # nvmfpid=1552889 00:23:16.112 12:17:28 -- nvmf/common.sh@470 -- # waitforlisten 1552889 00:23:16.113 12:17:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:16.113 12:17:28 -- common/autotest_common.sh@819 -- # '[' -z 1552889 ']' 00:23:16.113 12:17:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.113 12:17:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:16.113 12:17:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.113 12:17:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:16.113 12:17:28 -- common/autotest_common.sh@10 -- # set +x 00:23:16.113 [2024-06-11 12:17:28.182146] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:16.113 [2024-06-11 12:17:28.182212] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.113 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.113 [2024-06-11 12:17:28.253927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:16.113 [2024-06-11 12:17:28.292370] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:16.113 [2024-06-11 12:17:28.292520] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:16.113 [2024-06-11 12:17:28.292531] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.113 [2024-06-11 12:17:28.292540] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:16.113 [2024-06-11 12:17:28.292693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.113 [2024-06-11 12:17:28.292812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.113 [2024-06-11 12:17:28.292970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.113 [2024-06-11 12:17:28.292972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:16.113 12:17:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:16.113 12:17:28 -- common/autotest_common.sh@852 -- # return 0 00:23:16.113 12:17:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:16.113 12:17:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:16.113 12:17:28 -- common/autotest_common.sh@10 -- # set +x 00:23:16.113 12:17:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.113 12:17:28 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:16.113 12:17:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.113 12:17:28 -- common/autotest_common.sh@10 -- # set +x 00:23:16.113 [2024-06-11 12:17:29.002354] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.113 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.113 12:17:29 -- target/multiconnection.sh@21 -- # seq 1 11 00:23:16.113 12:17:29 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:16.113 12:17:29 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:16.113 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.113 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.113 Malloc1 00:23:16.113 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.113 12:17:29 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:23:16.113 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.113 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.113 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.113 12:17:29 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:16.113 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.113 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.113 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.113 12:17:29 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:16.113 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.113 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.113 [2024-06-11 12:17:29.065662] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.113 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.113 12:17:29 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:16.113 12:17:29 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:23:16.113 12:17:29 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.113 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.113 Malloc2 00:23:16.113 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.113 12:17:29 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:16.113 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.113 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.113 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.113 12:17:29 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:23:16.113 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.113 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.113 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.113 12:17:29 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:16.113 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.113 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.113 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.113 12:17:29 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:16.113 12:17:29 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:23:16.113 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.113 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.113 Malloc3 00:23:16.113 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.113 12:17:29 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:23:16.113 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.113 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.374 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.374 12:17:29 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:23:16.374 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.374 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.374 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.374 12:17:29 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:23:16.374 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.374 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.374 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.374 12:17:29 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:16.374 12:17:29 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:23:16.374 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.374 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.374 Malloc4 00:23:16.374 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.374 12:17:29 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:23:16.374 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.374 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.374 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.374 12:17:29 -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:23:16.374 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.374 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.374 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.374 12:17:29 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:23:16.374 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.374 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.374 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.374 12:17:29 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:16.374 12:17:29 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:23:16.374 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.374 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.374 Malloc5 00:23:16.374 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.374 12:17:29 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:23:16.374 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.374 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.374 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.374 12:17:29 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:23:16.374 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.374 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.374 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.374 12:17:29 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:23:16.374 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.374 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.374 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.374 12:17:29 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:16.374 12:17:29 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:23:16.374 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.374 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.374 Malloc6 00:23:16.374 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.374 12:17:29 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:23:16.374 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.374 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.374 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.374 12:17:29 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:23:16.374 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.374 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.374 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.374 12:17:29 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:23:16.374 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.374 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.374 12:17:29 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.374 12:17:29 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:16.374 12:17:29 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:23:16.374 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.374 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.374 Malloc7 00:23:16.374 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.374 12:17:29 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:23:16.374 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.374 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.374 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.374 12:17:29 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:23:16.374 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.374 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.374 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.374 12:17:29 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:23:16.374 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.374 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.374 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.374 12:17:29 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:16.374 12:17:29 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:23:16.374 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.374 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.636 Malloc8 00:23:16.636 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.636 12:17:29 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:23:16.636 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.636 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.636 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.636 12:17:29 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:23:16.636 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.636 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.636 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.636 12:17:29 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:23:16.636 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.636 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.636 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.636 12:17:29 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:16.636 12:17:29 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:23:16.636 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.636 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.636 Malloc9 00:23:16.636 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.636 12:17:29 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 
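The same four RPCs repeat for each of the 11 subsystems the multiconnection test creates (Malloc1/cnode1 through Malloc11/cnode11, serials SPDK1..SPDK11). Collapsed into a loop and issued through rpc.py directly (an assumption made for readability; the script actually goes through its rpc_cmd wrapper), the whole configuration amounts to:

  for i in $(seq 1 11); do
    # one 64 MiB malloc bdev with 512-byte blocks per subsystem
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done

Each subsystem is then attached from the initiator with nvme connect --hostnqn=... --hostid=... -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420 followed by waitforserial, which is what the per-cnode entries that follow do.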
00:23:16.636 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.636 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.636 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.636 12:17:29 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:23:16.636 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.636 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.636 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.636 12:17:29 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:23:16.636 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.637 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.637 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.637 12:17:29 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:16.637 12:17:29 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:23:16.637 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.637 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.637 Malloc10 00:23:16.637 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.637 12:17:29 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:23:16.637 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.637 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.637 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.637 12:17:29 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:23:16.637 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.637 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.637 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.637 12:17:29 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:23:16.637 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.637 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.637 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.637 12:17:29 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:16.637 12:17:29 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:23:16.637 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.637 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.637 Malloc11 00:23:16.637 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.637 12:17:29 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:23:16.637 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.637 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.637 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.637 12:17:29 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:23:16.637 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.637 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.637 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.637 12:17:29 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:23:16.637 12:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:16.637 12:17:29 -- common/autotest_common.sh@10 -- # set +x 00:23:16.637 12:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:16.637 12:17:29 -- target/multiconnection.sh@28 -- # seq 1 11 00:23:16.637 12:17:29 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:16.637 12:17:29 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:18.580 12:17:31 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:23:18.580 12:17:31 -- common/autotest_common.sh@1177 -- # local i=0 00:23:18.580 12:17:31 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:18.580 12:17:31 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:18.580 12:17:31 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:20.488 12:17:33 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:20.488 12:17:33 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:20.488 12:17:33 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:23:20.488 12:17:33 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:20.488 12:17:33 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:20.488 12:17:33 -- common/autotest_common.sh@1187 -- # return 0 00:23:20.488 12:17:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:20.488 12:17:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:23:21.872 12:17:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:23:21.872 12:17:34 -- common/autotest_common.sh@1177 -- # local i=0 00:23:21.872 12:17:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:21.872 12:17:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:21.872 12:17:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:23.785 12:17:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:23.785 12:17:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:23.785 12:17:36 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:23:23.785 12:17:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:23.785 12:17:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:23.785 12:17:36 -- common/autotest_common.sh@1187 -- # return 0 00:23:23.785 12:17:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.785 12:17:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:23:25.699 12:17:38 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:23:25.699 12:17:38 -- common/autotest_common.sh@1177 -- # local i=0 00:23:25.699 12:17:38 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:25.699 12:17:38 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:25.699 12:17:38 -- 
common/autotest_common.sh@1184 -- # sleep 2 00:23:27.610 12:17:40 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:27.610 12:17:40 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:27.610 12:17:40 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:23:27.610 12:17:40 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:27.610 12:17:40 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:27.610 12:17:40 -- common/autotest_common.sh@1187 -- # return 0 00:23:27.610 12:17:40 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:27.610 12:17:40 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:23:28.991 12:17:41 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:23:28.991 12:17:41 -- common/autotest_common.sh@1177 -- # local i=0 00:23:28.991 12:17:41 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:28.991 12:17:41 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:28.991 12:17:41 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:30.901 12:17:43 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:30.901 12:17:43 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:30.901 12:17:43 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:23:30.901 12:17:43 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:30.901 12:17:43 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:30.901 12:17:43 -- common/autotest_common.sh@1187 -- # return 0 00:23:30.901 12:17:43 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:30.901 12:17:43 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:23:32.811 12:17:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:23:32.811 12:17:45 -- common/autotest_common.sh@1177 -- # local i=0 00:23:32.811 12:17:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:32.811 12:17:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:32.811 12:17:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:34.721 12:17:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:34.721 12:17:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:34.721 12:17:47 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:23:34.721 12:17:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:34.721 12:17:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:34.721 12:17:47 -- common/autotest_common.sh@1187 -- # return 0 00:23:34.721 12:17:47 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:34.721 12:17:47 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:23:36.634 12:17:49 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:23:36.634 12:17:49 -- common/autotest_common.sh@1177 -- # local i=0 00:23:36.634 12:17:49 -- common/autotest_common.sh@1178 -- # local 
nvme_device_counter=1 nvme_devices=0 00:23:36.634 12:17:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:36.634 12:17:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:38.545 12:17:51 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:38.545 12:17:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:38.545 12:17:51 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:23:38.545 12:17:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:38.545 12:17:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:38.545 12:17:51 -- common/autotest_common.sh@1187 -- # return 0 00:23:38.545 12:17:51 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:38.545 12:17:51 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:23:39.927 12:17:52 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:23:39.927 12:17:52 -- common/autotest_common.sh@1177 -- # local i=0 00:23:39.927 12:17:52 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:39.927 12:17:52 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:39.927 12:17:52 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:42.470 12:17:54 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:42.470 12:17:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:42.470 12:17:54 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:23:42.470 12:17:54 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:42.470 12:17:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:42.471 12:17:54 -- common/autotest_common.sh@1187 -- # return 0 00:23:42.471 12:17:54 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:42.471 12:17:54 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:23:43.854 12:17:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:23:43.854 12:17:56 -- common/autotest_common.sh@1177 -- # local i=0 00:23:43.854 12:17:56 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:43.854 12:17:56 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:43.854 12:17:56 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:45.765 12:17:58 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:45.765 12:17:58 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:45.765 12:17:58 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:23:45.765 12:17:58 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:45.765 12:17:58 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:45.765 12:17:58 -- common/autotest_common.sh@1187 -- # return 0 00:23:45.765 12:17:58 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:45.765 12:17:58 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:23:47.676 12:18:00 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:23:47.676 
12:18:00 -- common/autotest_common.sh@1177 -- # local i=0 00:23:47.676 12:18:00 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:47.676 12:18:00 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:47.676 12:18:00 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:49.629 12:18:02 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:49.629 12:18:02 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:49.629 12:18:02 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:23:49.629 12:18:02 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:49.629 12:18:02 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:49.629 12:18:02 -- common/autotest_common.sh@1187 -- # return 0 00:23:49.629 12:18:02 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:49.629 12:18:02 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:23:51.554 12:18:04 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:23:51.554 12:18:04 -- common/autotest_common.sh@1177 -- # local i=0 00:23:51.554 12:18:04 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:51.554 12:18:04 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:51.554 12:18:04 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:53.467 12:18:06 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:53.467 12:18:06 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:53.467 12:18:06 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:23:53.467 12:18:06 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:53.467 12:18:06 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:53.467 12:18:06 -- common/autotest_common.sh@1187 -- # return 0 00:23:53.467 12:18:06 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:53.467 12:18:06 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:23:55.378 12:18:08 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:23:55.378 12:18:08 -- common/autotest_common.sh@1177 -- # local i=0 00:23:55.378 12:18:08 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:55.378 12:18:08 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:55.378 12:18:08 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:57.292 12:18:10 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:57.292 12:18:10 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:57.292 12:18:10 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:23:57.292 12:18:10 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:57.292 12:18:10 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:57.292 12:18:10 -- common/autotest_common.sh@1187 -- # return 0 00:23:57.292 12:18:10 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:23:57.292 [global] 00:23:57.292 thread=1 00:23:57.292 invalidate=1 00:23:57.292 rw=read 00:23:57.292 time_based=1 00:23:57.292 
runtime=10 00:23:57.292 ioengine=libaio 00:23:57.292 direct=1 00:23:57.292 bs=262144 00:23:57.292 iodepth=64 00:23:57.292 norandommap=1 00:23:57.292 numjobs=1 00:23:57.292 00:23:57.292 [job0] 00:23:57.292 filename=/dev/nvme0n1 00:23:57.292 [job1] 00:23:57.292 filename=/dev/nvme10n1 00:23:57.292 [job2] 00:23:57.292 filename=/dev/nvme1n1 00:23:57.292 [job3] 00:23:57.292 filename=/dev/nvme2n1 00:23:57.292 [job4] 00:23:57.292 filename=/dev/nvme3n1 00:23:57.292 [job5] 00:23:57.292 filename=/dev/nvme4n1 00:23:57.292 [job6] 00:23:57.292 filename=/dev/nvme5n1 00:23:57.292 [job7] 00:23:57.292 filename=/dev/nvme6n1 00:23:57.292 [job8] 00:23:57.292 filename=/dev/nvme7n1 00:23:57.292 [job9] 00:23:57.292 filename=/dev/nvme8n1 00:23:57.292 [job10] 00:23:57.292 filename=/dev/nvme9n1 00:23:57.552 Could not set queue depth (nvme0n1) 00:23:57.552 Could not set queue depth (nvme10n1) 00:23:57.552 Could not set queue depth (nvme1n1) 00:23:57.552 Could not set queue depth (nvme2n1) 00:23:57.552 Could not set queue depth (nvme3n1) 00:23:57.552 Could not set queue depth (nvme4n1) 00:23:57.552 Could not set queue depth (nvme5n1) 00:23:57.552 Could not set queue depth (nvme6n1) 00:23:57.552 Could not set queue depth (nvme7n1) 00:23:57.552 Could not set queue depth (nvme8n1) 00:23:57.552 Could not set queue depth (nvme9n1) 00:23:57.813 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:57.813 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:57.813 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:57.813 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:57.813 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:57.813 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:57.813 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:57.813 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:57.814 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:57.814 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:57.814 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:57.814 fio-3.35 00:23:57.814 Starting 11 threads 00:24:10.045 00:24:10.045 job0: (groupid=0, jobs=1): err= 0: pid=1562174: Tue Jun 11 12:18:21 2024 00:24:10.045 read: IOPS=608, BW=152MiB/s (160MB/s)(1538MiB/10107msec) 00:24:10.045 slat (usec): min=6, max=102436, avg=1445.70, stdev=4844.68 00:24:10.045 clat (msec): min=4, max=216, avg=103.54, stdev=33.72 00:24:10.045 lat (msec): min=5, max=216, avg=104.98, stdev=34.28 00:24:10.045 clat percentiles (msec): 00:24:10.045 | 1.00th=[ 19], 5.00th=[ 44], 10.00th=[ 54], 20.00th=[ 75], 00:24:10.045 | 30.00th=[ 95], 40.00th=[ 102], 50.00th=[ 108], 60.00th=[ 114], 00:24:10.045 | 70.00th=[ 120], 80.00th=[ 129], 90.00th=[ 142], 95.00th=[ 161], 00:24:10.045 | 99.00th=[ 171], 99.50th=[ 176], 99.90th=[ 213], 99.95th=[ 218], 00:24:10.045 | 99.99th=[ 218] 00:24:10.045 bw ( KiB/s): min=101888, max=314880, per=6.50%, avg=155852.80, 
stdev=44698.57, samples=20 00:24:10.045 iops : min= 398, max= 1230, avg=608.80, stdev=174.60, samples=20 00:24:10.045 lat (msec) : 10=0.34%, 20=0.86%, 50=6.47%, 100=30.04%, 250=62.29% 00:24:10.045 cpu : usr=0.23%, sys=1.89%, ctx=1471, majf=0, minf=4097 00:24:10.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:10.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:10.045 issued rwts: total=6152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.045 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:10.045 job1: (groupid=0, jobs=1): err= 0: pid=1562189: Tue Jun 11 12:18:21 2024 00:24:10.045 read: IOPS=1485, BW=371MiB/s (389MB/s)(3718MiB/10012msec) 00:24:10.045 slat (usec): min=5, max=97651, avg=638.48, stdev=2142.44 00:24:10.045 clat (msec): min=2, max=214, avg=42.40, stdev=26.92 00:24:10.045 lat (msec): min=2, max=214, avg=43.03, stdev=27.28 00:24:10.045 clat percentiles (msec): 00:24:10.045 | 1.00th=[ 10], 5.00th=[ 22], 10.00th=[ 24], 20.00th=[ 25], 00:24:10.045 | 30.00th=[ 26], 40.00th=[ 28], 50.00th=[ 35], 60.00th=[ 41], 00:24:10.045 | 70.00th=[ 45], 80.00th=[ 52], 90.00th=[ 73], 95.00th=[ 108], 00:24:10.045 | 99.00th=[ 146], 99.50th=[ 155], 99.90th=[ 178], 99.95th=[ 178], 00:24:10.045 | 99.99th=[ 182] 00:24:10.045 bw ( KiB/s): min=142336, max=652800, per=15.81%, avg=379110.40, stdev=149251.98, samples=20 00:24:10.045 iops : min= 556, max= 2550, avg=1480.90, stdev=583.02, samples=20 00:24:10.045 lat (msec) : 4=0.18%, 10=0.92%, 20=2.00%, 50=75.22%, 100=15.37% 00:24:10.045 lat (msec) : 250=6.31% 00:24:10.045 cpu : usr=0.40%, sys=4.43%, ctx=2893, majf=0, minf=4097 00:24:10.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:24:10.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:10.045 issued rwts: total=14872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.045 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:10.045 job2: (groupid=0, jobs=1): err= 0: pid=1562208: Tue Jun 11 12:18:21 2024 00:24:10.045 read: IOPS=820, BW=205MiB/s (215MB/s)(2071MiB/10095msec) 00:24:10.045 slat (usec): min=5, max=90826, avg=1061.06, stdev=3813.77 00:24:10.045 clat (usec): min=1975, max=250589, avg=76867.93, stdev=34737.04 00:24:10.045 lat (msec): min=2, max=253, avg=77.93, stdev=35.26 00:24:10.045 clat percentiles (msec): 00:24:10.045 | 1.00th=[ 7], 5.00th=[ 21], 10.00th=[ 32], 20.00th=[ 45], 00:24:10.045 | 30.00th=[ 60], 40.00th=[ 71], 50.00th=[ 79], 60.00th=[ 85], 00:24:10.045 | 70.00th=[ 92], 80.00th=[ 102], 90.00th=[ 122], 95.00th=[ 134], 00:24:10.045 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 203], 99.95th=[ 203], 00:24:10.045 | 99.99th=[ 251] 00:24:10.045 bw ( KiB/s): min=128000, max=372736, per=8.77%, avg=210380.80, stdev=66847.08, samples=20 00:24:10.045 iops : min= 500, max= 1456, avg=821.80, stdev=261.12, samples=20 00:24:10.045 lat (msec) : 2=0.02%, 4=0.21%, 10=1.62%, 20=2.84%, 50=18.46% 00:24:10.045 lat (msec) : 100=55.99%, 250=20.84%, 500=0.02% 00:24:10.045 cpu : usr=0.35%, sys=2.28%, ctx=1891, majf=0, minf=4097 00:24:10.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:10.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:10.045 issued 
rwts: total=8282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.045 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:10.045 job3: (groupid=0, jobs=1): err= 0: pid=1562220: Tue Jun 11 12:18:21 2024 00:24:10.045 read: IOPS=889, BW=222MiB/s (233MB/s)(2236MiB/10051msec) 00:24:10.045 slat (usec): min=6, max=90508, avg=969.74, stdev=3186.40 00:24:10.045 clat (msec): min=2, max=180, avg=70.89, stdev=28.15 00:24:10.045 lat (msec): min=3, max=223, avg=71.86, stdev=28.56 00:24:10.045 clat percentiles (msec): 00:24:10.046 | 1.00th=[ 15], 5.00th=[ 33], 10.00th=[ 41], 20.00th=[ 47], 00:24:10.046 | 30.00th=[ 53], 40.00th=[ 59], 50.00th=[ 66], 60.00th=[ 74], 00:24:10.046 | 70.00th=[ 87], 80.00th=[ 97], 90.00th=[ 109], 95.00th=[ 125], 00:24:10.046 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 150], 99.95th=[ 159], 00:24:10.046 | 99.99th=[ 182] 00:24:10.046 bw ( KiB/s): min=131072, max=362496, per=9.48%, avg=227302.40, stdev=62254.36, samples=20 00:24:10.046 iops : min= 512, max= 1416, avg=887.90, stdev=243.18, samples=20 00:24:10.046 lat (msec) : 4=0.03%, 10=0.35%, 20=1.61%, 50=23.27%, 100=57.66% 00:24:10.046 lat (msec) : 250=17.08% 00:24:10.046 cpu : usr=0.36%, sys=2.79%, ctx=2048, majf=0, minf=4097 00:24:10.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:10.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:10.046 issued rwts: total=8942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.046 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:10.046 job4: (groupid=0, jobs=1): err= 0: pid=1562226: Tue Jun 11 12:18:21 2024 00:24:10.046 read: IOPS=1056, BW=264MiB/s (277MB/s)(2667MiB/10099msec) 00:24:10.046 slat (usec): min=5, max=123843, avg=797.14, stdev=4047.58 00:24:10.046 clat (usec): min=641, max=287648, avg=59686.75, stdev=42639.56 00:24:10.046 lat (usec): min=691, max=291804, avg=60483.90, stdev=43297.95 00:24:10.046 clat percentiles (msec): 00:24:10.046 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 22], 20.00th=[ 26], 00:24:10.046 | 30.00th=[ 28], 40.00th=[ 32], 50.00th=[ 42], 60.00th=[ 62], 00:24:10.046 | 70.00th=[ 80], 80.00th=[ 101], 90.00th=[ 123], 95.00th=[ 146], 00:24:10.046 | 99.00th=[ 169], 99.50th=[ 186], 99.90th=[ 205], 99.95th=[ 259], 00:24:10.046 | 99.99th=[ 288] 00:24:10.046 bw ( KiB/s): min=94208, max=545792, per=11.32%, avg=271488.00, stdev=138566.46, samples=20 00:24:10.046 iops : min= 368, max= 2132, avg=1060.50, stdev=541.28, samples=20 00:24:10.046 lat (usec) : 750=0.01%, 1000=0.03% 00:24:10.046 lat (msec) : 2=0.46%, 4=0.46%, 10=2.70%, 20=5.48%, 50=45.47% 00:24:10.046 lat (msec) : 100=25.71%, 250=19.62%, 500=0.07% 00:24:10.046 cpu : usr=0.35%, sys=2.67%, ctx=2443, majf=0, minf=4097 00:24:10.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:24:10.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:10.046 issued rwts: total=10669,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.046 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:10.046 job5: (groupid=0, jobs=1): err= 0: pid=1562250: Tue Jun 11 12:18:21 2024 00:24:10.046 read: IOPS=575, BW=144MiB/s (151MB/s)(1453MiB/10101msec) 00:24:10.046 slat (usec): min=6, max=64922, avg=1721.92, stdev=4551.35 00:24:10.046 clat (msec): min=16, max=226, avg=109.40, stdev=27.63 00:24:10.046 lat (msec): min=17, max=226, 
avg=111.13, stdev=28.15 00:24:10.046 clat percentiles (msec): 00:24:10.046 | 1.00th=[ 48], 5.00th=[ 57], 10.00th=[ 71], 20.00th=[ 91], 00:24:10.046 | 30.00th=[ 99], 40.00th=[ 104], 50.00th=[ 110], 60.00th=[ 116], 00:24:10.046 | 70.00th=[ 124], 80.00th=[ 131], 90.00th=[ 142], 95.00th=[ 159], 00:24:10.046 | 99.00th=[ 171], 99.50th=[ 186], 99.90th=[ 199], 99.95th=[ 199], 00:24:10.046 | 99.99th=[ 228] 00:24:10.046 bw ( KiB/s): min=96768, max=261632, per=6.14%, avg=147097.60, stdev=34202.74, samples=20 00:24:10.046 iops : min= 378, max= 1022, avg=574.60, stdev=133.60, samples=20 00:24:10.046 lat (msec) : 20=0.09%, 50=1.58%, 100=32.81%, 250=65.52% 00:24:10.046 cpu : usr=0.22%, sys=2.19%, ctx=1329, majf=0, minf=3534 00:24:10.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:10.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:10.046 issued rwts: total=5810,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.046 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:10.046 job6: (groupid=0, jobs=1): err= 0: pid=1562262: Tue Jun 11 12:18:21 2024 00:24:10.046 read: IOPS=916, BW=229MiB/s (240MB/s)(2303MiB/10055msec) 00:24:10.046 slat (usec): min=5, max=80105, avg=1009.40, stdev=2919.40 00:24:10.046 clat (msec): min=2, max=187, avg=68.73, stdev=28.29 00:24:10.046 lat (msec): min=2, max=187, avg=69.74, stdev=28.65 00:24:10.046 clat percentiles (msec): 00:24:10.046 | 1.00th=[ 5], 5.00th=[ 20], 10.00th=[ 28], 20.00th=[ 50], 00:24:10.046 | 30.00th=[ 58], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 78], 00:24:10.046 | 70.00th=[ 82], 80.00th=[ 87], 90.00th=[ 99], 95.00th=[ 116], 00:24:10.046 | 99.00th=[ 150], 99.50th=[ 161], 99.90th=[ 169], 99.95th=[ 169], 00:24:10.046 | 99.99th=[ 188] 00:24:10.046 bw ( KiB/s): min=140288, max=477184, per=9.77%, avg=234240.00, stdev=84942.07, samples=20 00:24:10.046 iops : min= 548, max= 1864, avg=915.00, stdev=331.80, samples=20 00:24:10.046 lat (msec) : 4=0.39%, 10=2.93%, 20=1.73%, 50=15.47%, 100=70.38% 00:24:10.046 lat (msec) : 250=9.11% 00:24:10.046 cpu : usr=0.32%, sys=3.15%, ctx=1987, majf=0, minf=4097 00:24:10.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:24:10.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:10.046 issued rwts: total=9213,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.046 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:10.046 job7: (groupid=0, jobs=1): err= 0: pid=1562271: Tue Jun 11 12:18:21 2024 00:24:10.046 read: IOPS=726, BW=182MiB/s (190MB/s)(1826MiB/10052msec) 00:24:10.046 slat (usec): min=5, max=82563, avg=1185.95, stdev=3687.27 00:24:10.046 clat (usec): min=1488, max=203107, avg=86817.10, stdev=33662.32 00:24:10.046 lat (usec): min=1522, max=212170, avg=88003.05, stdev=34218.89 00:24:10.046 clat percentiles (msec): 00:24:10.046 | 1.00th=[ 6], 5.00th=[ 26], 10.00th=[ 47], 20.00th=[ 65], 00:24:10.046 | 30.00th=[ 73], 40.00th=[ 80], 50.00th=[ 85], 60.00th=[ 91], 00:24:10.046 | 70.00th=[ 100], 80.00th=[ 113], 90.00th=[ 131], 95.00th=[ 150], 00:24:10.046 | 99.00th=[ 167], 99.50th=[ 171], 99.90th=[ 197], 99.95th=[ 199], 00:24:10.046 | 99.99th=[ 203] 00:24:10.046 bw ( KiB/s): min=103424, max=334848, per=7.73%, avg=185326.45, stdev=56887.99, samples=20 00:24:10.046 iops : min= 404, max= 1308, avg=723.90, stdev=222.22, samples=20 
00:24:10.046 lat (msec) : 2=0.03%, 4=0.81%, 10=0.77%, 20=1.89%, 50=8.00% 00:24:10.046 lat (msec) : 100=60.16%, 250=28.35% 00:24:10.046 cpu : usr=0.30%, sys=2.30%, ctx=1730, majf=0, minf=4097 00:24:10.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:10.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:10.046 issued rwts: total=7304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.046 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:10.046 job8: (groupid=0, jobs=1): err= 0: pid=1562301: Tue Jun 11 12:18:21 2024 00:24:10.046 read: IOPS=835, BW=209MiB/s (219MB/s)(2100MiB/10052msec) 00:24:10.046 slat (usec): min=5, max=105468, avg=1021.68, stdev=4483.55 00:24:10.046 clat (usec): min=1840, max=266404, avg=75453.33, stdev=45524.15 00:24:10.046 lat (usec): min=1887, max=269576, avg=76475.01, stdev=46309.63 00:24:10.046 clat percentiles (msec): 00:24:10.046 | 1.00th=[ 7], 5.00th=[ 21], 10.00th=[ 27], 20.00th=[ 32], 00:24:10.046 | 30.00th=[ 37], 40.00th=[ 48], 50.00th=[ 63], 60.00th=[ 94], 00:24:10.046 | 70.00th=[ 110], 80.00th=[ 123], 90.00th=[ 138], 95.00th=[ 153], 00:24:10.046 | 99.00th=[ 167], 99.50th=[ 169], 99.90th=[ 239], 99.95th=[ 257], 00:24:10.046 | 99.99th=[ 268] 00:24:10.046 bw ( KiB/s): min=104448, max=529408, per=8.90%, avg=213427.20, stdev=100838.39, samples=20 00:24:10.046 iops : min= 408, max= 2068, avg=833.70, stdev=393.90, samples=20 00:24:10.046 lat (msec) : 2=0.01%, 4=0.21%, 10=1.74%, 20=2.98%, 50=37.49% 00:24:10.046 lat (msec) : 100=20.51%, 250=36.98%, 500=0.08% 00:24:10.046 cpu : usr=0.31%, sys=2.39%, ctx=1815, majf=0, minf=4097 00:24:10.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:10.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:10.046 issued rwts: total=8400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.046 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:10.046 job9: (groupid=0, jobs=1): err= 0: pid=1562314: Tue Jun 11 12:18:21 2024 00:24:10.046 read: IOPS=668, BW=167MiB/s (175MB/s)(1687MiB/10099msec) 00:24:10.046 slat (usec): min=5, max=84008, avg=1334.50, stdev=4384.98 00:24:10.046 clat (msec): min=4, max=222, avg=94.35, stdev=37.71 00:24:10.046 lat (msec): min=4, max=222, avg=95.68, stdev=38.39 00:24:10.046 clat percentiles (msec): 00:24:10.046 | 1.00th=[ 14], 5.00th=[ 33], 10.00th=[ 42], 20.00th=[ 59], 00:24:10.046 | 30.00th=[ 73], 40.00th=[ 90], 50.00th=[ 101], 60.00th=[ 109], 00:24:10.046 | 70.00th=[ 116], 80.00th=[ 126], 90.00th=[ 138], 95.00th=[ 153], 00:24:10.046 | 99.00th=[ 176], 99.50th=[ 186], 99.90th=[ 218], 99.95th=[ 218], 00:24:10.046 | 99.99th=[ 222] 00:24:10.046 bw ( KiB/s): min=101376, max=384512, per=7.14%, avg=171183.45, stdev=60820.39, samples=20 00:24:10.046 iops : min= 396, max= 1502, avg=668.65, stdev=237.55, samples=20 00:24:10.046 lat (msec) : 10=0.49%, 20=2.98%, 50=12.51%, 100=33.63%, 250=50.39% 00:24:10.046 cpu : usr=0.22%, sys=2.09%, ctx=1590, majf=0, minf=4097 00:24:10.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:10.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:10.046 issued rwts: total=6749,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.046 latency : 
target=0, window=0, percentile=100.00%, depth=64 00:24:10.046 job10: (groupid=0, jobs=1): err= 0: pid=1562322: Tue Jun 11 12:18:21 2024 00:24:10.046 read: IOPS=824, BW=206MiB/s (216MB/s)(2066MiB/10028msec) 00:24:10.046 slat (usec): min=6, max=68923, avg=1002.96, stdev=3345.57 00:24:10.046 clat (usec): min=1700, max=185995, avg=76587.45, stdev=31815.39 00:24:10.046 lat (usec): min=1749, max=188731, avg=77590.41, stdev=32264.29 00:24:10.046 clat percentiles (msec): 00:24:10.046 | 1.00th=[ 5], 5.00th=[ 16], 10.00th=[ 30], 20.00th=[ 54], 00:24:10.046 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 84], 00:24:10.046 | 70.00th=[ 92], 80.00th=[ 103], 90.00th=[ 117], 95.00th=[ 128], 00:24:10.046 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 165], 99.95th=[ 176], 00:24:10.046 | 99.99th=[ 186] 00:24:10.046 bw ( KiB/s): min=125691, max=304640, per=8.76%, avg=209958.15, stdev=51190.75, samples=20 00:24:10.046 iops : min= 490, max= 1190, avg=820.10, stdev=200.05, samples=20 00:24:10.047 lat (msec) : 2=0.06%, 4=0.80%, 10=2.93%, 20=3.41%, 50=10.65% 00:24:10.047 lat (msec) : 100=60.09%, 250=22.06% 00:24:10.047 cpu : usr=0.29%, sys=2.50%, ctx=1944, majf=0, minf=4097 00:24:10.047 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:10.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:10.047 issued rwts: total=8264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.047 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:10.047 00:24:10.047 Run status group 0 (all jobs): 00:24:10.047 READ: bw=2341MiB/s (2455MB/s), 144MiB/s-371MiB/s (151MB/s-389MB/s), io=23.1GiB (24.8GB), run=10012-10107msec 00:24:10.047 00:24:10.047 Disk stats (read/write): 00:24:10.047 nvme0n1: ios=11922/0, merge=0/0, ticks=1220875/0, in_queue=1220875, util=96.45% 00:24:10.047 nvme10n1: ios=29116/0, merge=0/0, ticks=1221991/0, in_queue=1221991, util=96.71% 00:24:10.047 nvme1n1: ios=16260/0, merge=0/0, ticks=1217615/0, in_queue=1217615, util=97.04% 00:24:10.047 nvme2n1: ios=17492/0, merge=0/0, ticks=1223897/0, in_queue=1223897, util=97.29% 00:24:10.047 nvme3n1: ios=21056/0, merge=0/0, ticks=1218666/0, in_queue=1218666, util=97.40% 00:24:10.047 nvme4n1: ios=11365/0, merge=0/0, ticks=1210978/0, in_queue=1210978, util=97.88% 00:24:10.047 nvme5n1: ios=18043/0, merge=0/0, ticks=1221566/0, in_queue=1221566, util=98.06% 00:24:10.047 nvme6n1: ios=14182/0, merge=0/0, ticks=1220808/0, in_queue=1220808, util=98.18% 00:24:10.047 nvme7n1: ios=16478/0, merge=0/0, ticks=1220192/0, in_queue=1220192, util=98.77% 00:24:10.047 nvme8n1: ios=13498/0, merge=0/0, ticks=1249888/0, in_queue=1249888, util=98.99% 00:24:10.047 nvme9n1: ios=15865/0, merge=0/0, ticks=1220188/0, in_queue=1220188, util=99.20% 00:24:10.047 12:18:21 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:10.047 [global] 00:24:10.047 thread=1 00:24:10.047 invalidate=1 00:24:10.047 rw=randwrite 00:24:10.047 time_based=1 00:24:10.047 runtime=10 00:24:10.047 ioengine=libaio 00:24:10.047 direct=1 00:24:10.047 bs=262144 00:24:10.047 iodepth=64 00:24:10.047 norandommap=1 00:24:10.047 numjobs=1 00:24:10.047 00:24:10.047 [job0] 00:24:10.047 filename=/dev/nvme0n1 00:24:10.047 [job1] 00:24:10.047 filename=/dev/nvme10n1 00:24:10.047 [job2] 00:24:10.047 filename=/dev/nvme1n1 00:24:10.047 [job3] 00:24:10.047 filename=/dev/nvme2n1 00:24:10.047 [job4] 
00:24:10.047 filename=/dev/nvme3n1 00:24:10.047 [job5] 00:24:10.047 filename=/dev/nvme4n1 00:24:10.047 [job6] 00:24:10.047 filename=/dev/nvme5n1 00:24:10.047 [job7] 00:24:10.047 filename=/dev/nvme6n1 00:24:10.047 [job8] 00:24:10.047 filename=/dev/nvme7n1 00:24:10.047 [job9] 00:24:10.047 filename=/dev/nvme8n1 00:24:10.047 [job10] 00:24:10.047 filename=/dev/nvme9n1 00:24:10.047 Could not set queue depth (nvme0n1) 00:24:10.047 Could not set queue depth (nvme10n1) 00:24:10.047 Could not set queue depth (nvme1n1) 00:24:10.047 Could not set queue depth (nvme2n1) 00:24:10.047 Could not set queue depth (nvme3n1) 00:24:10.047 Could not set queue depth (nvme4n1) 00:24:10.047 Could not set queue depth (nvme5n1) 00:24:10.047 Could not set queue depth (nvme6n1) 00:24:10.047 Could not set queue depth (nvme7n1) 00:24:10.047 Could not set queue depth (nvme8n1) 00:24:10.047 Could not set queue depth (nvme9n1) 00:24:10.047 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:10.047 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:10.047 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:10.047 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:10.047 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:10.047 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:10.047 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:10.047 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:10.047 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:10.047 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:10.047 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:10.047 fio-3.35 00:24:10.047 Starting 11 threads 00:24:20.046 00:24:20.046 job0: (groupid=0, jobs=1): err= 0: pid=1564430: Tue Jun 11 12:18:32 2024 00:24:20.046 write: IOPS=668, BW=167MiB/s (175MB/s)(1691MiB/10119msec); 0 zone resets 00:24:20.047 slat (usec): min=15, max=10328, avg=1405.23, stdev=2609.74 00:24:20.047 clat (msec): min=2, max=246, avg=94.32, stdev=26.47 00:24:20.047 lat (msec): min=3, max=246, avg=95.72, stdev=26.83 00:24:20.047 clat percentiles (msec): 00:24:20.047 | 1.00th=[ 15], 5.00th=[ 56], 10.00th=[ 72], 20.00th=[ 78], 00:24:20.047 | 30.00th=[ 81], 40.00th=[ 84], 50.00th=[ 88], 60.00th=[ 103], 00:24:20.047 | 70.00th=[ 109], 80.00th=[ 112], 90.00th=[ 132], 95.00th=[ 136], 00:24:20.047 | 99.00th=[ 146], 99.50th=[ 180], 99.90th=[ 230], 99.95th=[ 239], 00:24:20.047 | 99.99th=[ 247] 00:24:20.047 bw ( KiB/s): min=118784, max=242688, per=9.11%, avg=171520.00, stdev=35356.69, samples=20 00:24:20.047 iops : min= 464, max= 948, avg=670.00, stdev=138.11, samples=20 00:24:20.047 lat (msec) : 4=0.03%, 10=0.53%, 20=1.23%, 50=2.37%, 100=52.12% 00:24:20.047 lat (msec) : 250=43.72% 00:24:20.047 cpu : usr=1.47%, sys=1.87%, ctx=2121, majf=0, minf=1 00:24:20.047 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.2%, 32=0.5%, >=64=99.1% 00:24:20.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:20.047 issued rwts: total=0,6763,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.047 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:20.047 job1: (groupid=0, jobs=1): err= 0: pid=1564443: Tue Jun 11 12:18:32 2024 00:24:20.047 write: IOPS=636, BW=159MiB/s (167MB/s)(1604MiB/10081msec); 0 zone resets 00:24:20.047 slat (usec): min=16, max=40092, avg=1420.13, stdev=2722.68 00:24:20.047 clat (msec): min=23, max=167, avg=99.08, stdev=19.52 00:24:20.047 lat (msec): min=23, max=167, avg=100.50, stdev=19.78 00:24:20.047 clat percentiles (msec): 00:24:20.047 | 1.00th=[ 36], 5.00th=[ 62], 10.00th=[ 77], 20.00th=[ 85], 00:24:20.047 | 30.00th=[ 90], 40.00th=[ 101], 50.00th=[ 105], 60.00th=[ 107], 00:24:20.047 | 70.00th=[ 110], 80.00th=[ 113], 90.00th=[ 121], 95.00th=[ 125], 00:24:20.047 | 99.00th=[ 133], 99.50th=[ 134], 99.90th=[ 157], 99.95th=[ 161], 00:24:20.047 | 99.99th=[ 167] 00:24:20.047 bw ( KiB/s): min=137216, max=202240, per=8.64%, avg=162655.35, stdev=19729.69, samples=20 00:24:20.047 iops : min= 536, max= 790, avg=635.35, stdev=77.04, samples=20 00:24:20.047 lat (msec) : 50=3.48%, 100=37.20%, 250=59.32% 00:24:20.047 cpu : usr=1.47%, sys=1.69%, ctx=2194, majf=0, minf=1 00:24:20.047 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:24:20.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:20.047 issued rwts: total=0,6416,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.047 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:20.047 job2: (groupid=0, jobs=1): err= 0: pid=1564444: Tue Jun 11 12:18:32 2024 00:24:20.047 write: IOPS=657, BW=164MiB/s (172MB/s)(1659MiB/10093msec); 0 zone resets 00:24:20.047 slat (usec): min=16, max=23745, avg=1486.28, stdev=2625.39 00:24:20.047 clat (msec): min=5, max=197, avg=95.83, stdev=18.45 00:24:20.047 lat (msec): min=6, max=197, avg=97.32, stdev=18.56 00:24:20.047 clat percentiles (msec): 00:24:20.047 | 1.00th=[ 40], 5.00th=[ 75], 10.00th=[ 78], 20.00th=[ 80], 00:24:20.047 | 30.00th=[ 83], 40.00th=[ 86], 50.00th=[ 101], 60.00th=[ 106], 00:24:20.047 | 70.00th=[ 107], 80.00th=[ 110], 90.00th=[ 113], 95.00th=[ 126], 00:24:20.047 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 186], 99.95th=[ 192], 00:24:20.047 | 99.99th=[ 199] 00:24:20.047 bw ( KiB/s): min=128512, max=202752, per=8.94%, avg=168217.60, stdev=23617.80, samples=20 00:24:20.047 iops : min= 502, max= 792, avg=657.10, stdev=92.26, samples=20 00:24:20.047 lat (msec) : 10=0.06%, 20=0.15%, 50=1.19%, 100=49.31%, 250=49.29% 00:24:20.047 cpu : usr=1.39%, sys=1.72%, ctx=1785, majf=0, minf=1 00:24:20.047 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:20.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:20.047 issued rwts: total=0,6634,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.047 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:20.047 job3: (groupid=0, jobs=1): err= 0: pid=1564445: Tue Jun 11 12:18:32 2024 00:24:20.047 write: IOPS=628, BW=157MiB/s (165MB/s)(1584MiB/10080msec); 0 zone resets 00:24:20.047 slat (usec): min=23, max=37320, avg=1486.43, stdev=2741.69 00:24:20.047 clat (msec): 
min=16, max=166, avg=100.30, stdev=17.19 00:24:20.047 lat (msec): min=16, max=166, avg=101.79, stdev=17.35 00:24:20.047 clat percentiles (msec): 00:24:20.047 | 1.00th=[ 44], 5.00th=[ 70], 10.00th=[ 78], 20.00th=[ 89], 00:24:20.047 | 30.00th=[ 96], 40.00th=[ 102], 50.00th=[ 105], 60.00th=[ 108], 00:24:20.047 | 70.00th=[ 109], 80.00th=[ 111], 90.00th=[ 116], 95.00th=[ 126], 00:24:20.047 | 99.00th=[ 136], 99.50th=[ 138], 99.90th=[ 157], 99.95th=[ 161], 00:24:20.047 | 99.99th=[ 167] 00:24:20.047 bw ( KiB/s): min=129024, max=199680, per=8.53%, avg=160588.80, stdev=17024.35, samples=20 00:24:20.047 iops : min= 504, max= 780, avg=627.30, stdev=66.50, samples=20 00:24:20.047 lat (msec) : 20=0.13%, 50=1.40%, 100=33.68%, 250=64.79% 00:24:20.047 cpu : usr=1.44%, sys=1.91%, ctx=1966, majf=0, minf=1 00:24:20.047 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:20.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:20.047 issued rwts: total=0,6336,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.047 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:20.047 job4: (groupid=0, jobs=1): err= 0: pid=1564446: Tue Jun 11 12:18:32 2024 00:24:20.047 write: IOPS=735, BW=184MiB/s (193MB/s)(1860MiB/10118msec); 0 zone resets 00:24:20.047 slat (usec): min=17, max=43538, avg=1244.74, stdev=2610.23 00:24:20.047 clat (msec): min=4, max=250, avg=85.76, stdev=33.85 00:24:20.047 lat (msec): min=4, max=250, avg=87.01, stdev=34.30 00:24:20.047 clat percentiles (msec): 00:24:20.047 | 1.00th=[ 12], 5.00th=[ 30], 10.00th=[ 50], 20.00th=[ 58], 00:24:20.047 | 30.00th=[ 60], 40.00th=[ 63], 50.00th=[ 100], 60.00th=[ 106], 00:24:20.047 | 70.00th=[ 109], 80.00th=[ 113], 90.00th=[ 128], 95.00th=[ 136], 00:24:20.047 | 99.00th=[ 144], 99.50th=[ 176], 99.90th=[ 234], 99.95th=[ 243], 00:24:20.047 | 99.99th=[ 251] 00:24:20.047 bw ( KiB/s): min=118784, max=287744, per=10.03%, avg=188800.00, stdev=56980.71, samples=20 00:24:20.047 iops : min= 464, max= 1124, avg=737.50, stdev=222.58, samples=20 00:24:20.047 lat (msec) : 10=0.46%, 20=2.26%, 50=7.62%, 100=40.64%, 250=48.99% 00:24:20.047 lat (msec) : 500=0.03% 00:24:20.047 cpu : usr=1.66%, sys=2.12%, ctx=2431, majf=0, minf=1 00:24:20.047 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:20.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:20.047 issued rwts: total=0,7438,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.047 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:20.047 job5: (groupid=0, jobs=1): err= 0: pid=1564447: Tue Jun 11 12:18:32 2024 00:24:20.047 write: IOPS=657, BW=164MiB/s (172MB/s)(1664MiB/10119msec); 0 zone resets 00:24:20.047 slat (usec): min=22, max=29788, avg=1415.31, stdev=2767.58 00:24:20.047 clat (usec): min=1123, max=247360, avg=95857.37, stdev=34246.70 00:24:20.047 lat (usec): min=1561, max=247400, avg=97272.68, stdev=34739.92 00:24:20.047 clat percentiles (msec): 00:24:20.047 | 1.00th=[ 7], 5.00th=[ 21], 10.00th=[ 41], 20.00th=[ 82], 00:24:20.047 | 30.00th=[ 94], 40.00th=[ 104], 50.00th=[ 108], 60.00th=[ 109], 00:24:20.047 | 70.00th=[ 111], 80.00th=[ 118], 90.00th=[ 132], 95.00th=[ 134], 00:24:20.047 | 99.00th=[ 146], 99.50th=[ 180], 99.90th=[ 230], 99.95th=[ 239], 00:24:20.047 | 99.99th=[ 247] 00:24:20.047 bw ( KiB/s): min=118784, max=327168, per=8.96%, 
avg=168755.20, stdev=51407.45, samples=20 00:24:20.047 iops : min= 464, max= 1278, avg=659.20, stdev=200.81, samples=20 00:24:20.047 lat (msec) : 2=0.08%, 4=0.39%, 10=1.19%, 20=3.01%, 50=12.70% 00:24:20.047 lat (msec) : 100=15.75%, 250=66.90% 00:24:20.047 cpu : usr=1.52%, sys=1.99%, ctx=2312, majf=0, minf=1 00:24:20.047 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:20.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:20.047 issued rwts: total=0,6655,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.047 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:20.047 job6: (groupid=0, jobs=1): err= 0: pid=1564448: Tue Jun 11 12:18:32 2024 00:24:20.047 write: IOPS=660, BW=165MiB/s (173MB/s)(1667MiB/10094msec); 0 zone resets 00:24:20.047 slat (usec): min=21, max=59949, avg=1455.49, stdev=2844.70 00:24:20.047 clat (msec): min=5, max=197, avg=95.25, stdev=21.03 00:24:20.047 lat (msec): min=5, max=197, avg=96.71, stdev=21.23 00:24:20.047 clat percentiles (msec): 00:24:20.047 | 1.00th=[ 27], 5.00th=[ 61], 10.00th=[ 72], 20.00th=[ 78], 00:24:20.047 | 30.00th=[ 86], 40.00th=[ 96], 50.00th=[ 102], 60.00th=[ 105], 00:24:20.047 | 70.00th=[ 107], 80.00th=[ 109], 90.00th=[ 113], 95.00th=[ 128], 00:24:20.047 | 99.00th=[ 134], 99.50th=[ 146], 99.90th=[ 186], 99.95th=[ 192], 00:24:20.047 | 99.99th=[ 199] 00:24:20.047 bw ( KiB/s): min=124928, max=215040, per=8.98%, avg=169113.60, stdev=25625.37, samples=20 00:24:20.047 iops : min= 488, max= 840, avg=660.60, stdev=100.10, samples=20 00:24:20.047 lat (msec) : 10=0.10%, 20=0.43%, 50=2.91%, 100=43.89%, 250=52.66% 00:24:20.047 cpu : usr=1.39%, sys=1.89%, ctx=1868, majf=0, minf=1 00:24:20.047 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:20.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:20.047 issued rwts: total=0,6669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.047 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:20.047 job7: (groupid=0, jobs=1): err= 0: pid=1564449: Tue Jun 11 12:18:32 2024 00:24:20.047 write: IOPS=595, BW=149MiB/s (156MB/s)(1502MiB/10095msec); 0 zone resets 00:24:20.047 slat (usec): min=23, max=209473, avg=1659.59, stdev=3984.48 00:24:20.047 clat (msec): min=22, max=331, avg=105.80, stdev=22.97 00:24:20.047 lat (msec): min=22, max=331, avg=107.46, stdev=22.98 00:24:20.047 clat percentiles (msec): 00:24:20.047 | 1.00th=[ 77], 5.00th=[ 83], 10.00th=[ 86], 20.00th=[ 89], 00:24:20.047 | 30.00th=[ 100], 40.00th=[ 105], 50.00th=[ 107], 60.00th=[ 108], 00:24:20.047 | 70.00th=[ 111], 80.00th=[ 113], 90.00th=[ 121], 95.00th=[ 130], 00:24:20.047 | 99.00th=[ 218], 99.50th=[ 284], 99.90th=[ 317], 99.95th=[ 326], 00:24:20.048 | 99.99th=[ 334] 00:24:20.048 bw ( KiB/s): min=86528, max=190464, per=8.09%, avg=152217.60, stdev=21387.62, samples=20 00:24:20.048 iops : min= 338, max= 744, avg=594.60, stdev=83.55, samples=20 00:24:20.048 lat (msec) : 50=0.33%, 100=31.19%, 250=67.78%, 500=0.70% 00:24:20.048 cpu : usr=1.45%, sys=1.86%, ctx=1546, majf=0, minf=1 00:24:20.048 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:20.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:20.048 issued rwts: 
total=0,6009,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.048 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:20.048 job8: (groupid=0, jobs=1): err= 0: pid=1564450: Tue Jun 11 12:18:32 2024 00:24:20.048 write: IOPS=684, BW=171MiB/s (180MB/s)(1726MiB/10082msec); 0 zone resets 00:24:20.048 slat (usec): min=23, max=23742, avg=1382.86, stdev=2535.34 00:24:20.048 clat (msec): min=6, max=166, avg=92.04, stdev=19.41 00:24:20.048 lat (msec): min=7, max=166, avg=93.43, stdev=19.63 00:24:20.048 clat percentiles (msec): 00:24:20.048 | 1.00th=[ 31], 5.00th=[ 71], 10.00th=[ 75], 20.00th=[ 80], 00:24:20.048 | 30.00th=[ 82], 40.00th=[ 84], 50.00th=[ 88], 60.00th=[ 97], 00:24:20.048 | 70.00th=[ 106], 80.00th=[ 109], 90.00th=[ 113], 95.00th=[ 125], 00:24:20.048 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 161], 00:24:20.048 | 99.99th=[ 167] 00:24:20.048 bw ( KiB/s): min=124928, max=241152, per=9.30%, avg=175151.00, stdev=30765.91, samples=20 00:24:20.048 iops : min= 488, max= 942, avg=684.15, stdev=120.13, samples=20 00:24:20.048 lat (msec) : 10=0.09%, 20=0.38%, 50=1.85%, 100=60.08%, 250=37.60% 00:24:20.048 cpu : usr=1.42%, sys=2.04%, ctx=2078, majf=0, minf=1 00:24:20.048 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:20.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:20.048 issued rwts: total=0,6904,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.048 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:20.048 job9: (groupid=0, jobs=1): err= 0: pid=1564451: Tue Jun 11 12:18:32 2024 00:24:20.048 write: IOPS=615, BW=154MiB/s (161MB/s)(1554MiB/10096msec); 0 zone resets 00:24:20.048 slat (usec): min=22, max=29253, avg=1563.01, stdev=2764.98 00:24:20.048 clat (msec): min=17, max=194, avg=102.36, stdev=14.55 00:24:20.048 lat (msec): min=17, max=195, avg=103.93, stdev=14.53 00:24:20.048 clat percentiles (msec): 00:24:20.048 | 1.00th=[ 55], 5.00th=[ 82], 10.00th=[ 86], 20.00th=[ 89], 00:24:20.048 | 30.00th=[ 100], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 107], 00:24:20.048 | 70.00th=[ 109], 80.00th=[ 112], 90.00th=[ 115], 95.00th=[ 123], 00:24:20.048 | 99.00th=[ 138], 99.50th=[ 148], 99.90th=[ 182], 99.95th=[ 188], 00:24:20.048 | 99.99th=[ 194] 00:24:20.048 bw ( KiB/s): min=137216, max=192512, per=8.37%, avg=157505.20, stdev=15408.98, samples=20 00:24:20.048 iops : min= 536, max= 752, avg=615.25, stdev=60.20, samples=20 00:24:20.048 lat (msec) : 20=0.02%, 50=0.84%, 100=33.55%, 250=65.60% 00:24:20.048 cpu : usr=1.41%, sys=1.70%, ctx=1724, majf=0, minf=1 00:24:20.048 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:20.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:20.048 issued rwts: total=0,6215,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.048 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:20.048 job10: (groupid=0, jobs=1): err= 0: pid=1564452: Tue Jun 11 12:18:32 2024 00:24:20.048 write: IOPS=827, BW=207MiB/s (217MB/s)(2094MiB/10120msec); 0 zone resets 00:24:20.048 slat (usec): min=22, max=13066, avg=1006.06, stdev=2121.62 00:24:20.048 clat (msec): min=6, max=246, avg=76.29, stdev=29.91 00:24:20.048 lat (msec): min=6, max=246, avg=77.30, stdev=30.34 00:24:20.048 clat percentiles (msec): 00:24:20.048 | 1.00th=[ 19], 5.00th=[ 39], 10.00th=[ 51], 20.00th=[ 55], 
00:24:20.048 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 81], 00:24:20.048 | 70.00th=[ 100], 80.00th=[ 107], 90.00th=[ 114], 95.00th=[ 133], 00:24:20.048 | 99.00th=[ 138], 99.50th=[ 163], 99.90th=[ 230], 99.95th=[ 239], 00:24:20.048 | 99.99th=[ 247] 00:24:20.048 bw ( KiB/s): min=121344, max=314880, per=11.31%, avg=212812.80, stdev=67778.33, samples=20 00:24:20.048 iops : min= 474, max= 1230, avg=831.30, stdev=264.76, samples=20 00:24:20.048 lat (msec) : 10=0.04%, 20=1.19%, 50=7.74%, 100=61.83%, 250=29.20% 00:24:20.048 cpu : usr=1.73%, sys=2.77%, ctx=3324, majf=0, minf=1 00:24:20.048 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:20.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:20.048 issued rwts: total=0,8376,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.048 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:20.048 00:24:20.048 Run status group 0 (all jobs): 00:24:20.048 WRITE: bw=1838MiB/s (1928MB/s), 149MiB/s-207MiB/s (156MB/s-217MB/s), io=18.2GiB (19.5GB), run=10080-10120msec 00:24:20.048 00:24:20.048 Disk stats (read/write): 00:24:20.048 nvme0n1: ios=49/13496, merge=0/0, ticks=39/1229716, in_queue=1229755, util=96.70% 00:24:20.048 nvme10n1: ios=44/12484, merge=0/0, ticks=1683/1201407, in_queue=1203090, util=99.82% 00:24:20.048 nvme1n1: ios=39/12959, merge=0/0, ticks=745/1198227, in_queue=1198972, util=100.00% 00:24:20.048 nvme2n1: ios=20/12326, merge=0/0, ticks=247/1201182, in_queue=1201429, util=97.41% 00:24:20.048 nvme3n1: ios=46/14848, merge=0/0, ticks=1625/1222044, in_queue=1223669, util=99.84% 00:24:20.048 nvme4n1: ios=0/13282, merge=0/0, ticks=0/1229603, in_queue=1229603, util=97.75% 00:24:20.048 nvme5n1: ios=43/13031, merge=0/0, ticks=1617/1189480, in_queue=1191097, util=99.88% 00:24:20.048 nvme6n1: ios=38/12016, merge=0/0, ticks=1119/1207747, in_queue=1208866, util=99.85% 00:24:20.048 nvme7n1: ios=0/13458, merge=0/0, ticks=0/1200687, in_queue=1200687, util=98.61% 00:24:20.048 nvme8n1: ios=0/12425, merge=0/0, ticks=0/1230120, in_queue=1230120, util=98.88% 00:24:20.048 nvme9n1: ios=0/16721, merge=0/0, ticks=0/1234863, in_queue=1234863, util=99.07% 00:24:20.048 12:18:32 -- target/multiconnection.sh@36 -- # sync 00:24:20.048 12:18:32 -- target/multiconnection.sh@37 -- # seq 1 11 00:24:20.048 12:18:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:20.048 12:18:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:20.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:20.048 12:18:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:24:20.048 12:18:32 -- common/autotest_common.sh@1198 -- # local i=0 00:24:20.048 12:18:32 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:20.048 12:18:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:24:20.048 12:18:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:20.048 12:18:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:24:20.048 12:18:32 -- common/autotest_common.sh@1210 -- # return 0 00:24:20.048 12:18:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:20.048 12:18:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:20.048 12:18:32 -- common/autotest_common.sh@10 -- # set +x 00:24:20.048 12:18:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:24:20.048 12:18:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:20.048 12:18:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:24:20.310 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:24:20.310 12:18:33 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:24:20.310 12:18:33 -- common/autotest_common.sh@1198 -- # local i=0 00:24:20.310 12:18:33 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:20.310 12:18:33 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:24:20.310 12:18:33 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:20.310 12:18:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:24:20.310 12:18:33 -- common/autotest_common.sh@1210 -- # return 0 00:24:20.310 12:18:33 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:20.310 12:18:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:20.310 12:18:33 -- common/autotest_common.sh@10 -- # set +x 00:24:20.310 12:18:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:20.310 12:18:33 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:20.310 12:18:33 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:20.570 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:24:20.570 12:18:33 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:24:20.570 12:18:33 -- common/autotest_common.sh@1198 -- # local i=0 00:24:20.570 12:18:33 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:20.570 12:18:33 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:24:20.570 12:18:33 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:20.570 12:18:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:24:20.570 12:18:33 -- common/autotest_common.sh@1210 -- # return 0 00:24:20.570 12:18:33 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:20.570 12:18:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:20.570 12:18:33 -- common/autotest_common.sh@10 -- # set +x 00:24:20.570 12:18:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:20.570 12:18:33 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:20.570 12:18:33 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:20.831 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:20.831 12:18:33 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:24:20.831 12:18:33 -- common/autotest_common.sh@1198 -- # local i=0 00:24:20.831 12:18:33 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:20.831 12:18:33 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:24:20.831 12:18:33 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:20.831 12:18:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:24:20.831 12:18:33 -- common/autotest_common.sh@1210 -- # return 0 00:24:20.831 12:18:33 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:20.831 12:18:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:20.831 12:18:33 -- common/autotest_common.sh@10 -- # set +x 00:24:20.831 12:18:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:20.831 12:18:33 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:24:20.831 12:18:33 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:24:20.831 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:24:20.831 12:18:33 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:24:20.831 12:18:33 -- common/autotest_common.sh@1198 -- # local i=0 00:24:20.831 12:18:33 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:20.831 12:18:33 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:24:20.831 12:18:33 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:20.831 12:18:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:24:21.091 12:18:33 -- common/autotest_common.sh@1210 -- # return 0 00:24:21.091 12:18:33 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:21.091 12:18:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.091 12:18:33 -- common/autotest_common.sh@10 -- # set +x 00:24:21.091 12:18:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.091 12:18:33 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.091 12:18:33 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:24:21.091 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:24:21.091 12:18:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:24:21.091 12:18:34 -- common/autotest_common.sh@1198 -- # local i=0 00:24:21.091 12:18:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:21.091 12:18:34 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:24:21.351 12:18:34 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:21.351 12:18:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:24:21.351 12:18:34 -- common/autotest_common.sh@1210 -- # return 0 00:24:21.351 12:18:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:24:21.351 12:18:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.351 12:18:34 -- common/autotest_common.sh@10 -- # set +x 00:24:21.351 12:18:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.351 12:18:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.351 12:18:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:24:21.351 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:24:21.351 12:18:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:24:21.351 12:18:34 -- common/autotest_common.sh@1198 -- # local i=0 00:24:21.351 12:18:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:21.351 12:18:34 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:24:21.351 12:18:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:24:21.351 12:18:34 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:21.351 12:18:34 -- common/autotest_common.sh@1210 -- # return 0 00:24:21.351 12:18:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:24:21.351 12:18:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.351 12:18:34 -- common/autotest_common.sh@10 -- # set +x 00:24:21.612 12:18:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.612 12:18:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.612 12:18:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode8 00:24:21.612 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:24:21.612 12:18:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:24:21.612 12:18:34 -- common/autotest_common.sh@1198 -- # local i=0 00:24:21.612 12:18:34 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:24:21.612 12:18:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:21.612 12:18:34 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:21.612 12:18:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:24:21.612 12:18:34 -- common/autotest_common.sh@1210 -- # return 0 00:24:21.612 12:18:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:24:21.612 12:18:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.612 12:18:34 -- common/autotest_common.sh@10 -- # set +x 00:24:21.612 12:18:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.612 12:18:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.612 12:18:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:24:21.872 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:24:21.872 12:18:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:24:21.872 12:18:34 -- common/autotest_common.sh@1198 -- # local i=0 00:24:21.872 12:18:34 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:24:21.872 12:18:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:21.872 12:18:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:24:21.872 12:18:34 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:21.872 12:18:34 -- common/autotest_common.sh@1210 -- # return 0 00:24:21.872 12:18:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:24:21.872 12:18:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.872 12:18:34 -- common/autotest_common.sh@10 -- # set +x 00:24:21.872 12:18:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.872 12:18:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.872 12:18:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:24:21.872 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:24:21.872 12:18:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:24:21.873 12:18:34 -- common/autotest_common.sh@1198 -- # local i=0 00:24:21.873 12:18:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:21.873 12:18:34 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:24:21.873 12:18:34 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:21.873 12:18:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:24:21.873 12:18:34 -- common/autotest_common.sh@1210 -- # return 0 00:24:21.873 12:18:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:24:21.873 12:18:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.873 12:18:34 -- common/autotest_common.sh@10 -- # set +x 00:24:22.133 12:18:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.133 12:18:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.134 12:18:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:24:22.134 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 
controller(s) 00:24:22.134 12:18:35 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:24:22.134 12:18:35 -- common/autotest_common.sh@1198 -- # local i=0 00:24:22.134 12:18:35 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:22.134 12:18:35 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:24:22.134 12:18:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:22.134 12:18:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:24:22.134 12:18:35 -- common/autotest_common.sh@1210 -- # return 0 00:24:22.134 12:18:35 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:24:22.134 12:18:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.134 12:18:35 -- common/autotest_common.sh@10 -- # set +x 00:24:22.134 12:18:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.134 12:18:35 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:24:22.134 12:18:35 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:22.134 12:18:35 -- target/multiconnection.sh@47 -- # nvmftestfini 00:24:22.134 12:18:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:22.134 12:18:35 -- nvmf/common.sh@116 -- # sync 00:24:22.134 12:18:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:22.134 12:18:35 -- nvmf/common.sh@119 -- # set +e 00:24:22.134 12:18:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:22.134 12:18:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:22.134 rmmod nvme_tcp 00:24:22.134 rmmod nvme_fabrics 00:24:22.134 rmmod nvme_keyring 00:24:22.395 12:18:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:22.395 12:18:35 -- nvmf/common.sh@123 -- # set -e 00:24:22.395 12:18:35 -- nvmf/common.sh@124 -- # return 0 00:24:22.395 12:18:35 -- nvmf/common.sh@477 -- # '[' -n 1552889 ']' 00:24:22.395 12:18:35 -- nvmf/common.sh@478 -- # killprocess 1552889 00:24:22.395 12:18:35 -- common/autotest_common.sh@926 -- # '[' -z 1552889 ']' 00:24:22.395 12:18:35 -- common/autotest_common.sh@930 -- # kill -0 1552889 00:24:22.395 12:18:35 -- common/autotest_common.sh@931 -- # uname 00:24:22.395 12:18:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:22.395 12:18:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1552889 00:24:22.395 12:18:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:22.395 12:18:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:22.395 12:18:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1552889' 00:24:22.395 killing process with pid 1552889 00:24:22.395 12:18:35 -- common/autotest_common.sh@945 -- # kill 1552889 00:24:22.395 12:18:35 -- common/autotest_common.sh@950 -- # wait 1552889 00:24:22.656 12:18:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:22.656 12:18:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:22.656 12:18:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:22.656 12:18:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:22.656 12:18:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:22.656 12:18:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.656 12:18:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:22.656 12:18:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.569 12:18:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:24.569 00:24:24.569 real 1m16.955s 00:24:24.569 
user 4m47.022s 00:24:24.569 sys 0m22.107s 00:24:24.569 12:18:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:24.569 12:18:37 -- common/autotest_common.sh@10 -- # set +x 00:24:24.569 ************************************ 00:24:24.569 END TEST nvmf_multiconnection 00:24:24.569 ************************************ 00:24:24.831 12:18:37 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:24.831 12:18:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:24.831 12:18:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:24.831 12:18:37 -- common/autotest_common.sh@10 -- # set +x 00:24:24.831 ************************************ 00:24:24.831 START TEST nvmf_initiator_timeout 00:24:24.831 ************************************ 00:24:24.831 12:18:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:24.831 * Looking for test storage... 00:24:24.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:24.831 12:18:37 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:24.831 12:18:37 -- nvmf/common.sh@7 -- # uname -s 00:24:24.831 12:18:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:24.831 12:18:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:24.831 12:18:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:24.831 12:18:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:24.831 12:18:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:24.831 12:18:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:24.831 12:18:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:24.831 12:18:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:24.831 12:18:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:24.831 12:18:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:24.831 12:18:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:24.831 12:18:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:24.831 12:18:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:24.831 12:18:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:24.831 12:18:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:24.831 12:18:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:24.831 12:18:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:24.831 12:18:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:24.831 12:18:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:24.831 12:18:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.831 12:18:37 -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.831 12:18:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.831 12:18:37 -- paths/export.sh@5 -- # export PATH 00:24:24.831 12:18:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.831 12:18:37 -- nvmf/common.sh@46 -- # : 0 00:24:24.831 12:18:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:24.831 12:18:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:24.831 12:18:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:24.831 12:18:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:24.831 12:18:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:24.831 12:18:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:24.831 12:18:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:24.832 12:18:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:24.832 12:18:37 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:24.832 12:18:37 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:24.832 12:18:37 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:24.832 12:18:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:24.832 12:18:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:24.832 12:18:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:24.832 12:18:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:24.832 12:18:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:24.832 12:18:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.832 12:18:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:24.832 12:18:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.832 12:18:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:24.832 12:18:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:24.832 12:18:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:24.832 12:18:37 -- common/autotest_common.sh@10 -- # set +x 00:24:32.971 12:18:44 -- nvmf/common.sh@288 -- # 
local intel=0x8086 mellanox=0x15b3 pci 00:24:32.971 12:18:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:32.971 12:18:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:32.971 12:18:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:32.971 12:18:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:32.971 12:18:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:32.971 12:18:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:32.971 12:18:44 -- nvmf/common.sh@294 -- # net_devs=() 00:24:32.971 12:18:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:32.971 12:18:44 -- nvmf/common.sh@295 -- # e810=() 00:24:32.971 12:18:44 -- nvmf/common.sh@295 -- # local -ga e810 00:24:32.971 12:18:44 -- nvmf/common.sh@296 -- # x722=() 00:24:32.971 12:18:44 -- nvmf/common.sh@296 -- # local -ga x722 00:24:32.971 12:18:44 -- nvmf/common.sh@297 -- # mlx=() 00:24:32.971 12:18:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:32.971 12:18:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:32.971 12:18:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:32.971 12:18:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:32.971 12:18:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:32.971 12:18:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:32.971 12:18:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:32.971 12:18:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:32.971 12:18:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:32.971 12:18:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:32.971 12:18:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:32.971 12:18:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:32.971 12:18:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:32.971 12:18:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:32.971 12:18:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:32.971 12:18:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:32.971 12:18:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:32.971 12:18:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:32.971 12:18:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:32.971 12:18:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:32.971 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:32.971 12:18:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:32.971 12:18:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:32.971 12:18:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.971 12:18:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.971 12:18:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:32.971 12:18:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:32.971 12:18:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:32.971 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:32.971 12:18:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:32.971 12:18:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:32.971 12:18:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.971 12:18:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.971 12:18:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:32.971 12:18:44 -- 
nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:32.971 12:18:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:32.971 12:18:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:32.971 12:18:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:32.971 12:18:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.971 12:18:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:32.971 12:18:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.971 12:18:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:32.971 Found net devices under 0000:31:00.0: cvl_0_0 00:24:32.971 12:18:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.971 12:18:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:32.971 12:18:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.971 12:18:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:32.971 12:18:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.971 12:18:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:32.971 Found net devices under 0000:31:00.1: cvl_0_1 00:24:32.972 12:18:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.972 12:18:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:32.972 12:18:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:32.972 12:18:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:32.972 12:18:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:32.972 12:18:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:32.972 12:18:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.972 12:18:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:32.972 12:18:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:32.972 12:18:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:32.972 12:18:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:32.972 12:18:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:32.972 12:18:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:32.972 12:18:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:32.972 12:18:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.972 12:18:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:32.972 12:18:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:32.972 12:18:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:32.972 12:18:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:32.972 12:18:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:32.972 12:18:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:32.972 12:18:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:32.972 12:18:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:32.972 12:18:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:32.972 12:18:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:32.972 12:18:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:32.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:32.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:24:32.972 00:24:32.972 --- 10.0.0.2 ping statistics --- 00:24:32.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.972 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:24:32.972 12:18:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:32.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:32.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:24:32.972 00:24:32.972 --- 10.0.0.1 ping statistics --- 00:24:32.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.972 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:24:32.972 12:18:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.972 12:18:45 -- nvmf/common.sh@410 -- # return 0 00:24:32.972 12:18:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:32.972 12:18:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.972 12:18:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:32.972 12:18:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:32.972 12:18:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.972 12:18:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:32.972 12:18:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:32.972 12:18:45 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:24:32.972 12:18:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:32.972 12:18:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:32.972 12:18:45 -- common/autotest_common.sh@10 -- # set +x 00:24:32.972 12:18:45 -- nvmf/common.sh@469 -- # nvmfpid=1571167 00:24:32.972 12:18:45 -- nvmf/common.sh@470 -- # waitforlisten 1571167 00:24:32.972 12:18:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:32.972 12:18:45 -- common/autotest_common.sh@819 -- # '[' -z 1571167 ']' 00:24:32.972 12:18:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.972 12:18:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:32.972 12:18:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.972 12:18:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:32.972 12:18:45 -- common/autotest_common.sh@10 -- # set +x 00:24:32.972 [2024-06-11 12:18:45.120168] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:32.972 [2024-06-11 12:18:45.120232] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.972 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.972 [2024-06-11 12:18:45.191794] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:32.972 [2024-06-11 12:18:45.229284] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:32.972 [2024-06-11 12:18:45.229430] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:32.972 [2024-06-11 12:18:45.229441] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.972 [2024-06-11 12:18:45.229450] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:32.972 [2024-06-11 12:18:45.229599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.972 [2024-06-11 12:18:45.229722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.972 [2024-06-11 12:18:45.229883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.972 [2024-06-11 12:18:45.229884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:32.972 12:18:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:32.972 12:18:45 -- common/autotest_common.sh@852 -- # return 0 00:24:32.972 12:18:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:32.972 12:18:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:32.972 12:18:45 -- common/autotest_common.sh@10 -- # set +x 00:24:32.972 12:18:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.972 12:18:45 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:32.972 12:18:45 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:32.972 12:18:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:32.972 12:18:45 -- common/autotest_common.sh@10 -- # set +x 00:24:32.972 Malloc0 00:24:32.972 12:18:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:32.972 12:18:45 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:24:32.972 12:18:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:32.972 12:18:45 -- common/autotest_common.sh@10 -- # set +x 00:24:32.972 Delay0 00:24:32.972 12:18:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:32.972 12:18:45 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:32.972 12:18:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:32.972 12:18:45 -- common/autotest_common.sh@10 -- # set +x 00:24:32.972 [2024-06-11 12:18:45.973399] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.972 12:18:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:32.972 12:18:45 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:32.972 12:18:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:32.972 12:18:45 -- common/autotest_common.sh@10 -- # set +x 00:24:32.972 12:18:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:32.972 12:18:45 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:32.972 12:18:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:32.972 12:18:45 -- common/autotest_common.sh@10 -- # set +x 00:24:33.232 12:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:33.232 12:18:46 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:33.232 12:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:33.233 12:18:46 -- common/autotest_common.sh@10 -- # set +x 00:24:33.233 [2024-06-11 12:18:46.013670] tcp.c: 
951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:33.233 12:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:33.233 12:18:46 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:34.619 12:18:47 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:24:34.619 12:18:47 -- common/autotest_common.sh@1177 -- # local i=0 00:24:34.619 12:18:47 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:34.619 12:18:47 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:34.619 12:18:47 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:36.581 12:18:49 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:36.581 12:18:49 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:36.581 12:18:49 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:24:36.581 12:18:49 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:36.581 12:18:49 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:36.581 12:18:49 -- common/autotest_common.sh@1187 -- # return 0 00:24:36.581 12:18:49 -- target/initiator_timeout.sh@35 -- # fio_pid=1572028 00:24:36.581 12:18:49 -- target/initiator_timeout.sh@37 -- # sleep 3 00:24:36.581 12:18:49 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:24:36.581 [global] 00:24:36.581 thread=1 00:24:36.581 invalidate=1 00:24:36.581 rw=write 00:24:36.582 time_based=1 00:24:36.582 runtime=60 00:24:36.582 ioengine=libaio 00:24:36.582 direct=1 00:24:36.582 bs=4096 00:24:36.582 iodepth=1 00:24:36.582 norandommap=0 00:24:36.582 numjobs=1 00:24:36.582 00:24:36.582 verify_dump=1 00:24:36.582 verify_backlog=512 00:24:36.582 verify_state_save=0 00:24:36.582 do_verify=1 00:24:36.582 verify=crc32c-intel 00:24:36.582 [job0] 00:24:36.582 filename=/dev/nvme0n1 00:24:36.582 Could not set queue depth (nvme0n1) 00:24:36.842 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:36.842 fio-3.35 00:24:36.842 Starting 1 thread 00:24:40.143 12:18:52 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:24:40.143 12:18:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:40.143 12:18:52 -- common/autotest_common.sh@10 -- # set +x 00:24:40.143 true 00:24:40.143 12:18:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:40.143 12:18:52 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:24:40.143 12:18:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:40.143 12:18:52 -- common/autotest_common.sh@10 -- # set +x 00:24:40.143 true 00:24:40.143 12:18:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:40.143 12:18:52 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:24:40.143 12:18:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:40.143 12:18:52 -- common/autotest_common.sh@10 -- # set +x 00:24:40.143 true 00:24:40.143 12:18:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:40.143 12:18:52 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 
00:24:40.143 12:18:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:40.143 12:18:52 -- common/autotest_common.sh@10 -- # set +x 00:24:40.143 true 00:24:40.143 12:18:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:40.143 12:18:52 -- target/initiator_timeout.sh@45 -- # sleep 3 00:24:42.688 12:18:55 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:24:42.688 12:18:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:42.688 12:18:55 -- common/autotest_common.sh@10 -- # set +x 00:24:42.688 true 00:24:42.689 12:18:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:42.689 12:18:55 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:24:42.689 12:18:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:42.689 12:18:55 -- common/autotest_common.sh@10 -- # set +x 00:24:42.689 true 00:24:42.689 12:18:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:42.689 12:18:55 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:24:42.689 12:18:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:42.689 12:18:55 -- common/autotest_common.sh@10 -- # set +x 00:24:42.689 true 00:24:42.689 12:18:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:42.689 12:18:55 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:24:42.689 12:18:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:42.689 12:18:55 -- common/autotest_common.sh@10 -- # set +x 00:24:42.689 true 00:24:42.689 12:18:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:42.689 12:18:55 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:24:42.689 12:18:55 -- target/initiator_timeout.sh@54 -- # wait 1572028 00:25:38.961 00:25:38.961 job0: (groupid=0, jobs=1): err= 0: pid=1572343: Tue Jun 11 12:19:49 2024 00:25:38.961 read: IOPS=162, BW=652KiB/s (667kB/s)(38.2MiB/60001msec) 00:25:38.961 slat (usec): min=6, max=7683, avg=26.16, stdev=104.43 00:25:38.962 clat (usec): min=317, max=41887k, avg=5568.84, stdev=423688.58 00:25:38.962 lat (usec): min=324, max=41887k, avg=5595.01, stdev=423688.59 00:25:38.962 clat percentiles (usec): 00:25:38.962 | 1.00th=[ 578], 5.00th=[ 676], 10.00th=[ 742], 00:25:38.962 | 20.00th=[ 783], 30.00th=[ 840], 40.00th=[ 865], 00:25:38.962 | 50.00th=[ 881], 60.00th=[ 922], 70.00th=[ 1004], 00:25:38.962 | 80.00th=[ 1057], 90.00th=[ 1106], 95.00th=[ 1123], 00:25:38.962 | 99.00th=[ 1237], 99.50th=[ 42206], 99.90th=[ 42730], 00:25:38.962 | 99.95th=[ 43254], 99.99th=[17112761] 00:25:38.962 write: IOPS=170, BW=683KiB/s (699kB/s)(40.0MiB/60001msec); 0 zone resets 00:25:38.962 slat (usec): min=9, max=29820, avg=32.20, stdev=294.53 00:25:38.962 clat (usec): min=155, max=3957, avg=471.12, stdev=153.46 00:25:38.962 lat (usec): min=165, max=31903, avg=503.32, stdev=346.62 00:25:38.962 clat percentiles (usec): 00:25:38.962 | 1.00th=[ 202], 5.00th=[ 260], 10.00th=[ 302], 20.00th=[ 322], 00:25:38.962 | 30.00th=[ 367], 40.00th=[ 412], 50.00th=[ 453], 60.00th=[ 523], 00:25:38.962 | 70.00th=[ 562], 80.00th=[ 603], 90.00th=[ 676], 95.00th=[ 701], 00:25:38.962 | 99.00th=[ 750], 99.50th=[ 775], 99.90th=[ 832], 99.95th=[ 881], 00:25:38.962 | 99.99th=[ 3654] 00:25:38.962 bw ( KiB/s): min= 24, max= 4096, per=100.00%, avg=2779.43, stdev=1312.20, samples=28 00:25:38.962 iops : min= 6, max= 1024, avg=694.86, stdev=328.05, samples=28 00:25:38.962 lat (usec) : 250=2.36%, 500=26.46%, 750=27.29%, 
1000=28.89% 00:25:38.962 lat (msec) : 2=14.51%, 4=0.03%, 10=0.01%, 50=0.45%, >=2000=0.01% 00:25:38.962 cpu : usr=0.53%, sys=0.93%, ctx=20022, majf=0, minf=1 00:25:38.962 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:38.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.962 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.962 issued rwts: total=9774,10240,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.962 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:38.962 00:25:38.962 Run status group 0 (all jobs): 00:25:38.962 READ: bw=652KiB/s (667kB/s), 652KiB/s-652KiB/s (667kB/s-667kB/s), io=38.2MiB (40.0MB), run=60001-60001msec 00:25:38.962 WRITE: bw=683KiB/s (699kB/s), 683KiB/s-683KiB/s (699kB/s-699kB/s), io=40.0MiB (41.9MB), run=60001-60001msec 00:25:38.962 00:25:38.962 Disk stats (read/write): 00:25:38.962 nvme0n1: ios=9780/10089, merge=0/0, ticks=13497/4625, in_queue=18122, util=99.95% 00:25:38.962 12:19:49 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:38.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:38.962 12:19:50 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:38.962 12:19:50 -- common/autotest_common.sh@1198 -- # local i=0 00:25:38.962 12:19:50 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:38.962 12:19:50 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:38.962 12:19:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:38.962 12:19:50 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:38.962 12:19:50 -- common/autotest_common.sh@1210 -- # return 0 00:25:38.962 12:19:50 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:25:38.962 12:19:50 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:25:38.962 nvmf hotplug test: fio successful as expected 00:25:38.962 12:19:50 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:38.962 12:19:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.962 12:19:50 -- common/autotest_common.sh@10 -- # set +x 00:25:38.962 12:19:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.962 12:19:50 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:25:38.962 12:19:50 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:25:38.962 12:19:50 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:25:38.962 12:19:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:38.962 12:19:50 -- nvmf/common.sh@116 -- # sync 00:25:38.962 12:19:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:38.962 12:19:50 -- nvmf/common.sh@119 -- # set +e 00:25:38.962 12:19:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:38.962 12:19:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:38.962 rmmod nvme_tcp 00:25:38.962 rmmod nvme_fabrics 00:25:38.962 rmmod nvme_keyring 00:25:38.962 12:19:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:38.962 12:19:50 -- nvmf/common.sh@123 -- # set -e 00:25:38.962 12:19:50 -- nvmf/common.sh@124 -- # return 0 00:25:38.962 12:19:50 -- nvmf/common.sh@477 -- # '[' -n 1571167 ']' 00:25:38.962 12:19:50 -- nvmf/common.sh@478 -- # killprocess 1571167 00:25:38.962 12:19:50 -- common/autotest_common.sh@926 -- # '[' -z 1571167 ']' 00:25:38.962 12:19:50 -- 
common/autotest_common.sh@930 -- # kill -0 1571167 00:25:38.962 12:19:50 -- common/autotest_common.sh@931 -- # uname 00:25:38.962 12:19:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:38.962 12:19:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1571167 00:25:38.962 12:19:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:38.962 12:19:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:38.962 12:19:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1571167' 00:25:38.962 killing process with pid 1571167 00:25:38.962 12:19:50 -- common/autotest_common.sh@945 -- # kill 1571167 00:25:38.962 12:19:50 -- common/autotest_common.sh@950 -- # wait 1571167 00:25:38.962 12:19:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:38.962 12:19:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:38.962 12:19:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:38.962 12:19:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:38.962 12:19:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:38.962 12:19:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.962 12:19:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:38.962 12:19:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.532 12:19:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:39.532 00:25:39.532 real 1m14.811s 00:25:39.532 user 4m36.732s 00:25:39.532 sys 0m7.732s 00:25:39.532 12:19:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:39.532 12:19:52 -- common/autotest_common.sh@10 -- # set +x 00:25:39.532 ************************************ 00:25:39.532 END TEST nvmf_initiator_timeout 00:25:39.532 ************************************ 00:25:39.532 12:19:52 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:25:39.532 12:19:52 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:25:39.532 12:19:52 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:25:39.532 12:19:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:39.532 12:19:52 -- common/autotest_common.sh@10 -- # set +x 00:25:47.669 12:19:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:47.669 12:19:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:47.669 12:19:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:47.669 12:19:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:47.669 12:19:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:47.669 12:19:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:47.669 12:19:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:47.669 12:19:59 -- nvmf/common.sh@294 -- # net_devs=() 00:25:47.669 12:19:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:47.669 12:19:59 -- nvmf/common.sh@295 -- # e810=() 00:25:47.669 12:19:59 -- nvmf/common.sh@295 -- # local -ga e810 00:25:47.669 12:19:59 -- nvmf/common.sh@296 -- # x722=() 00:25:47.669 12:19:59 -- nvmf/common.sh@296 -- # local -ga x722 00:25:47.669 12:19:59 -- nvmf/common.sh@297 -- # mlx=() 00:25:47.669 12:19:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:47.669 12:19:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:47.669 12:19:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:47.669 12:19:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:47.669 12:19:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:47.669 12:19:59 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:47.669 12:19:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:47.669 12:19:59 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:47.669 12:19:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:47.669 12:19:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:47.669 12:19:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:47.669 12:19:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:47.669 12:19:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:47.669 12:19:59 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:47.669 12:19:59 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:47.669 12:19:59 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:47.669 12:19:59 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:47.669 12:19:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:47.669 12:19:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:47.669 12:19:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:47.669 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:47.669 12:19:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:47.669 12:19:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:47.669 12:19:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.669 12:19:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.669 12:19:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:47.669 12:19:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:47.669 12:19:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:47.669 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:47.669 12:19:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:47.669 12:19:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:47.669 12:19:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.669 12:19:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.669 12:19:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:47.669 12:19:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:47.669 12:19:59 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:47.669 12:19:59 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:47.669 12:19:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:47.669 12:19:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.669 12:19:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:47.669 12:19:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.669 12:19:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:47.669 Found net devices under 0000:31:00.0: cvl_0_0 00:25:47.669 12:19:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.669 12:19:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:47.669 12:19:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.669 12:19:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:47.669 12:19:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.669 12:19:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:47.669 Found net devices under 0000:31:00.1: cvl_0_1 00:25:47.669 12:19:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.669 12:19:59 -- 
nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:47.669 12:19:59 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:47.669 12:19:59 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:25:47.669 12:19:59 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:47.669 12:19:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:47.669 12:19:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:47.669 12:19:59 -- common/autotest_common.sh@10 -- # set +x 00:25:47.669 ************************************ 00:25:47.669 START TEST nvmf_perf_adq 00:25:47.669 ************************************ 00:25:47.669 12:19:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:47.669 * Looking for test storage... 00:25:47.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:47.669 12:19:59 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:47.669 12:19:59 -- nvmf/common.sh@7 -- # uname -s 00:25:47.669 12:19:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.669 12:19:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.669 12:19:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.669 12:19:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:47.669 12:19:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:47.669 12:19:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.669 12:19:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.669 12:19:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.669 12:19:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.669 12:19:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.669 12:19:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:47.669 12:19:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:47.669 12:19:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.669 12:19:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.669 12:19:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:47.669 12:19:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:47.669 12:19:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.669 12:19:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.669 12:19:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.669 12:19:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.669 12:19:59 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.669 12:19:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.669 12:19:59 -- paths/export.sh@5 -- # export PATH 00:25:47.669 12:19:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.669 12:19:59 -- nvmf/common.sh@46 -- # : 0 00:25:47.669 12:19:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:47.669 12:19:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:47.669 12:19:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:47.670 12:19:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.670 12:19:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.670 12:19:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:47.670 12:19:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:47.670 12:19:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:47.670 12:19:59 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:25:47.670 12:19:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:47.670 12:19:59 -- common/autotest_common.sh@10 -- # set +x 00:25:54.253 12:20:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:54.253 12:20:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:54.253 12:20:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:54.253 12:20:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:54.253 12:20:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:54.253 12:20:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:54.253 12:20:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:54.253 12:20:06 -- nvmf/common.sh@294 -- # net_devs=() 00:25:54.253 12:20:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:54.253 12:20:06 -- nvmf/common.sh@295 -- # e810=() 00:25:54.253 12:20:06 -- nvmf/common.sh@295 -- # local -ga e810 00:25:54.253 12:20:06 -- nvmf/common.sh@296 -- # x722=() 00:25:54.253 12:20:06 -- nvmf/common.sh@296 -- # local -ga x722 00:25:54.253 12:20:06 -- nvmf/common.sh@297 -- # mlx=() 00:25:54.253 12:20:06 -- nvmf/common.sh@297 -- # local 
-ga mlx 00:25:54.253 12:20:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:54.253 12:20:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:54.253 12:20:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:54.253 12:20:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:54.253 12:20:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:54.253 12:20:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:54.253 12:20:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:54.253 12:20:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:54.253 12:20:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:54.253 12:20:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:54.253 12:20:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:54.253 12:20:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:54.253 12:20:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:54.253 12:20:06 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:54.253 12:20:06 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:54.253 12:20:06 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:54.253 12:20:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:54.253 12:20:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:54.253 12:20:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:54.253 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:54.253 12:20:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:54.253 12:20:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:54.253 12:20:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.253 12:20:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.253 12:20:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:54.253 12:20:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:54.253 12:20:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:54.253 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:54.253 12:20:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:54.253 12:20:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:54.253 12:20:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.253 12:20:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.253 12:20:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:54.253 12:20:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:54.253 12:20:06 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:54.253 12:20:06 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:54.253 12:20:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:54.253 12:20:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.253 12:20:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:54.253 12:20:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.253 12:20:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:54.253 Found net devices under 0000:31:00.0: cvl_0_0 00:25:54.253 12:20:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.253 12:20:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:54.253 12:20:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:25:54.253 12:20:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:54.253 12:20:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.253 12:20:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:54.253 Found net devices under 0000:31:00.1: cvl_0_1 00:25:54.253 12:20:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.253 12:20:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:54.253 12:20:06 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:54.253 12:20:06 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:25:54.253 12:20:06 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:54.253 12:20:06 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:25:54.253 12:20:06 -- target/perf_adq.sh@52 -- # rmmod ice 00:25:55.195 12:20:07 -- target/perf_adq.sh@53 -- # modprobe ice 00:25:57.109 12:20:09 -- target/perf_adq.sh@54 -- # sleep 5 00:26:02.401 12:20:14 -- target/perf_adq.sh@67 -- # nvmftestinit 00:26:02.401 12:20:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:02.401 12:20:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.401 12:20:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:02.401 12:20:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:02.402 12:20:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:02.402 12:20:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.402 12:20:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:02.402 12:20:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.402 12:20:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:02.402 12:20:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:02.402 12:20:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:02.402 12:20:14 -- common/autotest_common.sh@10 -- # set +x 00:26:02.402 12:20:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:02.402 12:20:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:02.402 12:20:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:02.402 12:20:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:02.402 12:20:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:02.402 12:20:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:02.402 12:20:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:02.402 12:20:14 -- nvmf/common.sh@294 -- # net_devs=() 00:26:02.402 12:20:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:02.402 12:20:14 -- nvmf/common.sh@295 -- # e810=() 00:26:02.402 12:20:14 -- nvmf/common.sh@295 -- # local -ga e810 00:26:02.402 12:20:14 -- nvmf/common.sh@296 -- # x722=() 00:26:02.402 12:20:14 -- nvmf/common.sh@296 -- # local -ga x722 00:26:02.402 12:20:14 -- nvmf/common.sh@297 -- # mlx=() 00:26:02.402 12:20:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:02.402 12:20:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:02.402 12:20:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:02.402 12:20:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:02.402 12:20:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:02.402 12:20:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:02.402 12:20:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:02.402 12:20:14 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:02.402 12:20:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:02.402 12:20:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:02.402 12:20:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:02.402 12:20:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:02.402 12:20:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:02.402 12:20:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:02.402 12:20:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:02.402 12:20:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:02.402 12:20:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:02.402 12:20:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:02.402 12:20:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:02.402 12:20:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:02.402 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:02.402 12:20:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:02.402 12:20:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:02.402 12:20:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.402 12:20:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.402 12:20:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:02.402 12:20:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:02.402 12:20:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:02.402 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:02.402 12:20:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:02.402 12:20:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:02.402 12:20:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.402 12:20:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.402 12:20:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:02.402 12:20:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:02.402 12:20:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:02.402 12:20:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:02.402 12:20:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:02.402 12:20:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.402 12:20:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:02.402 12:20:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.402 12:20:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:02.402 Found net devices under 0000:31:00.0: cvl_0_0 00:26:02.402 12:20:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.402 12:20:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:02.402 12:20:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.402 12:20:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:02.402 12:20:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.402 12:20:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:02.402 Found net devices under 0000:31:00.1: cvl_0_1 00:26:02.402 12:20:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.402 12:20:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:02.402 12:20:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:02.402 12:20:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:02.402 12:20:14 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:02.402 12:20:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:02.402 12:20:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:02.402 12:20:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:02.402 12:20:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:02.402 12:20:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:02.402 12:20:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:02.402 12:20:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:02.402 12:20:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:02.402 12:20:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:02.402 12:20:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:02.402 12:20:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:02.402 12:20:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:02.402 12:20:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:02.402 12:20:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:02.402 12:20:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:02.402 12:20:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:02.402 12:20:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:02.402 12:20:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:02.402 12:20:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:02.402 12:20:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:02.402 12:20:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:02.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:02.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:26:02.402 00:26:02.402 --- 10.0.0.2 ping statistics --- 00:26:02.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.402 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:26:02.402 12:20:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:02.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:02.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:26:02.402 00:26:02.402 --- 10.0.0.1 ping statistics --- 00:26:02.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.402 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:26:02.402 12:20:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:02.402 12:20:15 -- nvmf/common.sh@410 -- # return 0 00:26:02.402 12:20:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:02.402 12:20:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:02.402 12:20:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:02.402 12:20:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:02.402 12:20:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:02.402 12:20:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:02.402 12:20:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:02.402 12:20:15 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:02.402 12:20:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:02.402 12:20:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:02.402 12:20:15 -- common/autotest_common.sh@10 -- # set +x 00:26:02.402 12:20:15 -- nvmf/common.sh@469 -- # nvmfpid=1593679 00:26:02.402 12:20:15 -- nvmf/common.sh@470 -- # waitforlisten 1593679 00:26:02.402 12:20:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:02.402 12:20:15 -- common/autotest_common.sh@819 -- # '[' -z 1593679 ']' 00:26:02.402 12:20:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.402 12:20:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:02.402 12:20:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:02.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:02.402 12:20:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:02.402 12:20:15 -- common/autotest_common.sh@10 -- # set +x 00:26:02.402 [2024-06-11 12:20:15.135706] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:02.402 [2024-06-11 12:20:15.135760] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:02.402 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.402 [2024-06-11 12:20:15.202281] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:02.402 [2024-06-11 12:20:15.233124] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:02.402 [2024-06-11 12:20:15.233258] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:02.402 [2024-06-11 12:20:15.233269] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:02.402 [2024-06-11 12:20:15.233277] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
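Condensed from the nvmf_tcp_init trace above, the test topology is a two-port loopback: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and carries the target address 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1. Interface names and addresses below are exactly as they appear in this run.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP from the initiator port
    ping -c 1 10.0.0.2                                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> initiator

The nvmf_tgt process is then launched inside that namespace (NVMF_TARGET_NS_CMD prefixes every target command with ip netns exec cvl_0_0_ns_spdk), so the sub-millisecond ping times above are the sanity check that both directions of the loopback are alive before the target starts listening on port 4420.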
00:26:02.402 [2024-06-11 12:20:15.233414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.402 [2024-06-11 12:20:15.233530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:02.402 [2024-06-11 12:20:15.233687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.402 [2024-06-11 12:20:15.233688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:02.974 12:20:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:02.975 12:20:15 -- common/autotest_common.sh@852 -- # return 0 00:26:02.975 12:20:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:02.975 12:20:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:02.975 12:20:15 -- common/autotest_common.sh@10 -- # set +x 00:26:02.975 12:20:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:02.975 12:20:15 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:26:02.975 12:20:15 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:02.975 12:20:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:02.975 12:20:15 -- common/autotest_common.sh@10 -- # set +x 00:26:02.975 12:20:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:02.975 12:20:15 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:26:02.975 12:20:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:02.975 12:20:15 -- common/autotest_common.sh@10 -- # set +x 00:26:03.290 12:20:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.290 12:20:16 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:03.290 12:20:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.290 12:20:16 -- common/autotest_common.sh@10 -- # set +x 00:26:03.290 [2024-06-11 12:20:16.029929] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:03.290 12:20:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.290 12:20:16 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:03.290 12:20:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.290 12:20:16 -- common/autotest_common.sh@10 -- # set +x 00:26:03.290 Malloc1 00:26:03.290 12:20:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.290 12:20:16 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:03.290 12:20:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.290 12:20:16 -- common/autotest_common.sh@10 -- # set +x 00:26:03.290 12:20:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.290 12:20:16 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:03.290 12:20:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.290 12:20:16 -- common/autotest_common.sh@10 -- # set +x 00:26:03.290 12:20:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.290 12:20:16 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:03.290 12:20:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.290 12:20:16 -- common/autotest_common.sh@10 -- # set +x 00:26:03.290 [2024-06-11 12:20:16.089226] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:03.290 12:20:16 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.290 12:20:16 -- target/perf_adq.sh@73 -- # perfpid=1593783 00:26:03.290 12:20:16 -- target/perf_adq.sh@74 -- # sleep 2 00:26:03.290 12:20:16 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:03.290 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.205 12:20:18 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:26:05.205 12:20:18 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:05.205 12:20:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.205 12:20:18 -- target/perf_adq.sh@76 -- # wc -l 00:26:05.205 12:20:18 -- common/autotest_common.sh@10 -- # set +x 00:26:05.205 12:20:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:05.205 12:20:18 -- target/perf_adq.sh@76 -- # count=4 00:26:05.205 12:20:18 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:26:05.205 12:20:18 -- target/perf_adq.sh@81 -- # wait 1593783 00:26:13.341 Initializing NVMe Controllers 00:26:13.341 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:13.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:13.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:13.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:13.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:13.341 Initialization complete. Launching workers. 00:26:13.341 ======================================================== 00:26:13.341 Latency(us) 00:26:13.341 Device Information : IOPS MiB/s Average min max 00:26:13.341 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12381.64 48.37 5169.19 1309.92 9232.47 00:26:13.341 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15443.33 60.33 4143.59 1077.93 8344.87 00:26:13.341 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14843.13 57.98 4311.17 1052.49 12040.98 00:26:13.341 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 14560.63 56.88 4394.86 939.29 12037.21 00:26:13.341 ======================================================== 00:26:13.341 Total : 57228.74 223.55 4472.88 939.29 12040.98 00:26:13.341 00:26:13.341 12:20:26 -- target/perf_adq.sh@82 -- # nvmftestfini 00:26:13.341 12:20:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:13.341 12:20:26 -- nvmf/common.sh@116 -- # sync 00:26:13.341 12:20:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:13.341 12:20:26 -- nvmf/common.sh@119 -- # set +e 00:26:13.341 12:20:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:13.341 12:20:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:13.341 rmmod nvme_tcp 00:26:13.341 rmmod nvme_fabrics 00:26:13.341 rmmod nvme_keyring 00:26:13.341 12:20:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:13.341 12:20:26 -- nvmf/common.sh@123 -- # set -e 00:26:13.341 12:20:26 -- nvmf/common.sh@124 -- # return 0 00:26:13.341 12:20:26 -- nvmf/common.sh@477 -- # '[' -n 1593679 ']' 00:26:13.341 12:20:26 -- nvmf/common.sh@478 -- # killprocess 1593679 00:26:13.341 12:20:26 -- common/autotest_common.sh@926 -- # '[' -z 1593679 ']' 00:26:13.341 12:20:26 -- common/autotest_common.sh@930 -- # 
kill -0 1593679 00:26:13.341 12:20:26 -- common/autotest_common.sh@931 -- # uname 00:26:13.341 12:20:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:13.341 12:20:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1593679 00:26:13.341 12:20:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:13.341 12:20:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:13.341 12:20:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1593679' 00:26:13.341 killing process with pid 1593679 00:26:13.341 12:20:26 -- common/autotest_common.sh@945 -- # kill 1593679 00:26:13.341 12:20:26 -- common/autotest_common.sh@950 -- # wait 1593679 00:26:13.603 12:20:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:13.603 12:20:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:13.603 12:20:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:13.603 12:20:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:13.603 12:20:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:13.603 12:20:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.603 12:20:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:13.603 12:20:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.517 12:20:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:15.517 12:20:28 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:26:15.517 12:20:28 -- target/perf_adq.sh@52 -- # rmmod ice 00:26:17.430 12:20:30 -- target/perf_adq.sh@53 -- # modprobe ice 00:26:18.812 12:20:31 -- target/perf_adq.sh@54 -- # sleep 5 00:26:24.100 12:20:36 -- target/perf_adq.sh@87 -- # nvmftestinit 00:26:24.100 12:20:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:24.100 12:20:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:24.100 12:20:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:24.100 12:20:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:24.100 12:20:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:24.100 12:20:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.100 12:20:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:24.100 12:20:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.100 12:20:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:24.100 12:20:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:24.100 12:20:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:24.100 12:20:36 -- common/autotest_common.sh@10 -- # set +x 00:26:24.100 12:20:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:24.100 12:20:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:24.100 12:20:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:24.100 12:20:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:24.100 12:20:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:24.100 12:20:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:24.100 12:20:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:24.100 12:20:36 -- nvmf/common.sh@294 -- # net_devs=() 00:26:24.100 12:20:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:24.100 12:20:36 -- nvmf/common.sh@295 -- # e810=() 00:26:24.100 12:20:36 -- nvmf/common.sh@295 -- # local -ga e810 00:26:24.100 12:20:36 -- nvmf/common.sh@296 -- # x722=() 00:26:24.100 12:20:36 -- nvmf/common.sh@296 -- # local -ga x722 00:26:24.100 12:20:36 -- nvmf/common.sh@297 -- # mlx=() 00:26:24.100 12:20:36 -- 
nvmf/common.sh@297 -- # local -ga mlx 00:26:24.100 12:20:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:24.100 12:20:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:24.100 12:20:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:24.100 12:20:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:24.100 12:20:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:24.100 12:20:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:24.100 12:20:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:24.100 12:20:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:24.100 12:20:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:24.100 12:20:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:24.100 12:20:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:24.100 12:20:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:24.100 12:20:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:24.100 12:20:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:24.100 12:20:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:24.100 12:20:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:24.100 12:20:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:24.100 12:20:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:24.100 12:20:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:24.100 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:24.100 12:20:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:24.100 12:20:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:24.100 12:20:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.100 12:20:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.100 12:20:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:24.100 12:20:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:24.100 12:20:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:24.100 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:24.100 12:20:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:24.100 12:20:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:24.100 12:20:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.100 12:20:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.100 12:20:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:24.100 12:20:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:24.100 12:20:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:24.100 12:20:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:24.100 12:20:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:24.101 12:20:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.101 12:20:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:24.101 12:20:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.101 12:20:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:24.101 Found net devices under 0000:31:00.0: cvl_0_0 00:26:24.101 12:20:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.101 12:20:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:24.101 12:20:36 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.101 12:20:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:24.101 12:20:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.101 12:20:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:24.101 Found net devices under 0000:31:00.1: cvl_0_1 00:26:24.101 12:20:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.101 12:20:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:24.101 12:20:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:24.101 12:20:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:24.101 12:20:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:24.101 12:20:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:24.101 12:20:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:24.101 12:20:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:24.101 12:20:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:24.101 12:20:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:24.101 12:20:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:24.101 12:20:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:24.101 12:20:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:24.101 12:20:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:24.101 12:20:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:24.101 12:20:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:24.101 12:20:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:24.101 12:20:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:24.101 12:20:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:24.101 12:20:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:24.101 12:20:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:24.101 12:20:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:24.101 12:20:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:24.101 12:20:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:24.101 12:20:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:24.362 12:20:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:24.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:24.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:26:24.362 00:26:24.362 --- 10.0.0.2 ping statistics --- 00:26:24.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.362 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:26:24.362 12:20:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:24.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:24.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:26:24.362 00:26:24.362 --- 10.0.0.1 ping statistics --- 00:26:24.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.362 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:26:24.362 12:20:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:24.362 12:20:37 -- nvmf/common.sh@410 -- # return 0 00:26:24.362 12:20:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:24.362 12:20:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:24.362 12:20:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:24.362 12:20:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:24.362 12:20:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:24.362 12:20:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:24.362 12:20:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:24.362 12:20:37 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:26:24.362 12:20:37 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:24.362 12:20:37 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:24.362 12:20:37 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:24.362 net.core.busy_poll = 1 00:26:24.362 12:20:37 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:24.362 net.core.busy_read = 1 00:26:24.362 12:20:37 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:24.362 12:20:37 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:24.362 12:20:37 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:24.623 12:20:37 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:24.623 12:20:37 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:24.623 12:20:37 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:24.623 12:20:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:24.623 12:20:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:24.623 12:20:37 -- common/autotest_common.sh@10 -- # set +x 00:26:24.623 12:20:37 -- nvmf/common.sh@469 -- # nvmfpid=1598349 00:26:24.623 12:20:37 -- nvmf/common.sh@470 -- # waitforlisten 1598349 00:26:24.623 12:20:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:24.623 12:20:37 -- common/autotest_common.sh@819 -- # '[' -z 1598349 ']' 00:26:24.623 12:20:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.623 12:20:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:24.623 12:20:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
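adq_configure_driver is where this second perf pass diverges from the first: rather than a plain TCP run, the target-side port is put into ADQ mode so that NVMe/TCP traffic to 10.0.0.2:4420 lands in a dedicated hardware traffic class, and busy polling is enabled on the host. Pulled together from the trace above (the ethtool and tc steps run via ip netns exec cvl_0_0_ns_spdk, since the port lives in that namespace):

    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # two traffic classes: TC0 = queues 0-1, TC1 = queues 2-3, offloaded in channel mode
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    # steer NVMe/TCP (10.0.0.2:4420) into hardware TC 1, hardware-only match
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The matching software-side knobs appear in the RPCs that follow: sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix and nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 (the first, non-ADQ pass used placement-id 0 and sock-priority 0). The later nvmf_get_stats | jq check on poll_groups then verifies how the I/O qpairs ended up distributed across the four target cores.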
00:26:24.623 12:20:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:24.623 12:20:37 -- common/autotest_common.sh@10 -- # set +x 00:26:24.623 [2024-06-11 12:20:37.531084] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:24.623 [2024-06-11 12:20:37.531145] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:24.623 EAL: No free 2048 kB hugepages reported on node 1 00:26:24.623 [2024-06-11 12:20:37.607843] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:24.623 [2024-06-11 12:20:37.644840] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:24.623 [2024-06-11 12:20:37.644999] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:24.623 [2024-06-11 12:20:37.645009] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:24.623 [2024-06-11 12:20:37.645025] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:24.623 [2024-06-11 12:20:37.645078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.623 [2024-06-11 12:20:37.645106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:24.623 [2024-06-11 12:20:37.645266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.623 [2024-06-11 12:20:37.645266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:25.566 12:20:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:25.566 12:20:38 -- common/autotest_common.sh@852 -- # return 0 00:26:25.566 12:20:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:25.566 12:20:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:25.566 12:20:38 -- common/autotest_common.sh@10 -- # set +x 00:26:25.566 12:20:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:25.567 12:20:38 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:26:25.567 12:20:38 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:25.567 12:20:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:25.567 12:20:38 -- common/autotest_common.sh@10 -- # set +x 00:26:25.567 12:20:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:25.567 12:20:38 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:26:25.567 12:20:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:25.567 12:20:38 -- common/autotest_common.sh@10 -- # set +x 00:26:25.567 12:20:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:25.567 12:20:38 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:25.567 12:20:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:25.567 12:20:38 -- common/autotest_common.sh@10 -- # set +x 00:26:25.567 [2024-06-11 12:20:38.430947] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.567 12:20:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:25.567 12:20:38 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:25.567 12:20:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:25.567 12:20:38 -- 
common/autotest_common.sh@10 -- # set +x 00:26:25.567 Malloc1 00:26:25.567 12:20:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:25.567 12:20:38 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:25.567 12:20:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:25.567 12:20:38 -- common/autotest_common.sh@10 -- # set +x 00:26:25.567 12:20:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:25.567 12:20:38 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:25.567 12:20:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:25.567 12:20:38 -- common/autotest_common.sh@10 -- # set +x 00:26:25.567 12:20:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:25.567 12:20:38 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:25.567 12:20:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:25.567 12:20:38 -- common/autotest_common.sh@10 -- # set +x 00:26:25.567 [2024-06-11 12:20:38.486273] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:25.567 12:20:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:25.567 12:20:38 -- target/perf_adq.sh@94 -- # perfpid=1598587 00:26:25.567 12:20:38 -- target/perf_adq.sh@95 -- # sleep 2 00:26:25.567 12:20:38 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:25.567 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.481 12:20:40 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:26:27.481 12:20:40 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:27.481 12:20:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.481 12:20:40 -- common/autotest_common.sh@10 -- # set +x 00:26:27.481 12:20:40 -- target/perf_adq.sh@97 -- # wc -l 00:26:27.743 12:20:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.743 12:20:40 -- target/perf_adq.sh@97 -- # count=2 00:26:27.743 12:20:40 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:26:27.743 12:20:40 -- target/perf_adq.sh@103 -- # wait 1598587 00:26:35.877 Initializing NVMe Controllers 00:26:35.877 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:35.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:35.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:35.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:35.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:35.877 Initialization complete. Launching workers. 
00:26:35.877 ======================================================== 00:26:35.877 Latency(us) 00:26:35.877 Device Information : IOPS MiB/s Average min max 00:26:35.877 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7891.73 30.83 8111.10 990.92 53781.99 00:26:35.877 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 20661.45 80.71 3097.12 944.09 45197.71 00:26:35.877 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7577.23 29.60 8446.64 1119.01 53225.81 00:26:35.877 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8010.32 31.29 7998.23 1058.74 54107.82 00:26:35.877 ======================================================== 00:26:35.877 Total : 44140.73 172.42 5801.27 944.09 54107.82 00:26:35.877 00:26:35.877 12:20:48 -- target/perf_adq.sh@104 -- # nvmftestfini 00:26:35.877 12:20:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:35.877 12:20:48 -- nvmf/common.sh@116 -- # sync 00:26:35.877 12:20:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:35.877 12:20:48 -- nvmf/common.sh@119 -- # set +e 00:26:35.877 12:20:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:35.877 12:20:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:35.877 rmmod nvme_tcp 00:26:35.877 rmmod nvme_fabrics 00:26:35.877 rmmod nvme_keyring 00:26:35.877 12:20:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:35.877 12:20:48 -- nvmf/common.sh@123 -- # set -e 00:26:35.877 12:20:48 -- nvmf/common.sh@124 -- # return 0 00:26:35.877 12:20:48 -- nvmf/common.sh@477 -- # '[' -n 1598349 ']' 00:26:35.877 12:20:48 -- nvmf/common.sh@478 -- # killprocess 1598349 00:26:35.877 12:20:48 -- common/autotest_common.sh@926 -- # '[' -z 1598349 ']' 00:26:35.877 12:20:48 -- common/autotest_common.sh@930 -- # kill -0 1598349 00:26:35.877 12:20:48 -- common/autotest_common.sh@931 -- # uname 00:26:35.877 12:20:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:35.877 12:20:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1598349 00:26:35.877 12:20:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:35.877 12:20:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:35.877 12:20:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1598349' 00:26:35.877 killing process with pid 1598349 00:26:35.877 12:20:48 -- common/autotest_common.sh@945 -- # kill 1598349 00:26:35.877 12:20:48 -- common/autotest_common.sh@950 -- # wait 1598349 00:26:35.877 12:20:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:35.877 12:20:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:35.877 12:20:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:35.877 12:20:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:35.877 12:20:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:35.877 12:20:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.877 12:20:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:35.878 12:20:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.423 12:20:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:38.423 12:20:50 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:26:38.423 00:26:38.423 real 0m51.659s 00:26:38.423 user 2m48.935s 00:26:38.423 sys 0m10.251s 00:26:38.423 12:20:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:38.423 12:20:50 -- common/autotest_common.sh@10 -- # set +x 00:26:38.423 
************************************ 00:26:38.423 END TEST nvmf_perf_adq 00:26:38.423 ************************************ 00:26:38.423 12:20:51 -- nvmf/nvmf.sh@80 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:38.423 12:20:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:38.423 12:20:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:38.423 12:20:51 -- common/autotest_common.sh@10 -- # set +x 00:26:38.423 ************************************ 00:26:38.423 START TEST nvmf_shutdown 00:26:38.423 ************************************ 00:26:38.423 12:20:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:38.423 * Looking for test storage... 00:26:38.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:38.423 12:20:51 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:38.423 12:20:51 -- nvmf/common.sh@7 -- # uname -s 00:26:38.423 12:20:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:38.423 12:20:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:38.423 12:20:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:38.423 12:20:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:38.423 12:20:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:38.423 12:20:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:38.423 12:20:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:38.423 12:20:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:38.423 12:20:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:38.423 12:20:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:38.423 12:20:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:38.423 12:20:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:38.423 12:20:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:38.423 12:20:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:38.423 12:20:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:38.423 12:20:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:38.423 12:20:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:38.423 12:20:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:38.423 12:20:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:38.423 12:20:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.423 12:20:51 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.423 12:20:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.423 12:20:51 -- paths/export.sh@5 -- # export PATH 00:26:38.423 12:20:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.423 12:20:51 -- nvmf/common.sh@46 -- # : 0 00:26:38.424 12:20:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:38.424 12:20:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:38.424 12:20:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:38.424 12:20:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:38.424 12:20:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:38.424 12:20:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:38.424 12:20:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:38.424 12:20:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:38.424 12:20:51 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:38.424 12:20:51 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:38.424 12:20:51 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:38.424 12:20:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:38.424 12:20:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:38.424 12:20:51 -- common/autotest_common.sh@10 -- # set +x 00:26:38.424 ************************************ 00:26:38.424 START TEST nvmf_shutdown_tc1 00:26:38.424 ************************************ 00:26:38.424 12:20:51 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:26:38.424 12:20:51 -- target/shutdown.sh@74 -- # starttarget 00:26:38.424 12:20:51 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:38.424 12:20:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:38.424 12:20:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:38.424 12:20:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:38.424 12:20:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:38.424 12:20:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:38.424 
12:20:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.424 12:20:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:38.424 12:20:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.424 12:20:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:38.424 12:20:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:38.424 12:20:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:38.424 12:20:51 -- common/autotest_common.sh@10 -- # set +x 00:26:46.569 12:20:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:46.569 12:20:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:46.569 12:20:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:46.569 12:20:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:46.569 12:20:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:46.569 12:20:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:46.569 12:20:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:46.569 12:20:58 -- nvmf/common.sh@294 -- # net_devs=() 00:26:46.570 12:20:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:46.570 12:20:58 -- nvmf/common.sh@295 -- # e810=() 00:26:46.570 12:20:58 -- nvmf/common.sh@295 -- # local -ga e810 00:26:46.570 12:20:58 -- nvmf/common.sh@296 -- # x722=() 00:26:46.570 12:20:58 -- nvmf/common.sh@296 -- # local -ga x722 00:26:46.570 12:20:58 -- nvmf/common.sh@297 -- # mlx=() 00:26:46.570 12:20:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:46.570 12:20:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:46.570 12:20:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:46.570 12:20:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:46.570 12:20:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:46.570 12:20:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:46.570 12:20:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:46.570 12:20:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:46.570 12:20:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:46.570 12:20:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:46.570 12:20:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:46.570 12:20:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:46.570 12:20:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:46.570 12:20:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:46.570 12:20:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:46.570 12:20:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:46.570 12:20:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:46.570 12:20:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:46.570 12:20:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:46.570 12:20:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:46.570 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:46.570 12:20:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:46.570 12:20:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:46.570 12:20:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.570 12:20:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.570 12:20:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:46.570 12:20:58 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:26:46.570 12:20:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:46.570 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:46.570 12:20:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:46.570 12:20:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:46.570 12:20:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.570 12:20:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.570 12:20:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:46.570 12:20:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:46.570 12:20:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:46.570 12:20:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:46.570 12:20:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:46.570 12:20:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.570 12:20:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:46.570 12:20:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.570 12:20:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:46.570 Found net devices under 0000:31:00.0: cvl_0_0 00:26:46.570 12:20:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.570 12:20:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:46.570 12:20:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.570 12:20:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:46.570 12:20:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.570 12:20:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:46.570 Found net devices under 0000:31:00.1: cvl_0_1 00:26:46.570 12:20:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.570 12:20:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:46.570 12:20:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:46.570 12:20:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:46.570 12:20:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:46.570 12:20:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:46.570 12:20:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:46.570 12:20:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:46.570 12:20:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:46.570 12:20:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:46.570 12:20:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:46.570 12:20:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:46.570 12:20:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:46.570 12:20:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:46.570 12:20:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:46.570 12:20:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:46.570 12:20:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:46.570 12:20:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:46.570 12:20:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:46.570 12:20:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:46.570 12:20:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:46.570 12:20:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:46.570 12:20:58 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:46.570 12:20:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:46.570 12:20:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:46.570 12:20:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:46.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:46.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:26:46.570 00:26:46.570 --- 10.0.0.2 ping statistics --- 00:26:46.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:46.570 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:26:46.570 12:20:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:46.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:46.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:26:46.570 00:26:46.570 --- 10.0.0.1 ping statistics --- 00:26:46.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:46.570 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:26:46.570 12:20:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:46.570 12:20:58 -- nvmf/common.sh@410 -- # return 0 00:26:46.570 12:20:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:46.570 12:20:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:46.570 12:20:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:46.570 12:20:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:46.570 12:20:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:46.570 12:20:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:46.570 12:20:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:46.570 12:20:58 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:46.570 12:20:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:46.570 12:20:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:46.570 12:20:58 -- common/autotest_common.sh@10 -- # set +x 00:26:46.570 12:20:58 -- nvmf/common.sh@469 -- # nvmfpid=1604815 00:26:46.570 12:20:58 -- nvmf/common.sh@470 -- # waitforlisten 1604815 00:26:46.570 12:20:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:46.570 12:20:58 -- common/autotest_common.sh@819 -- # '[' -z 1604815 ']' 00:26:46.570 12:20:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:46.570 12:20:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:46.570 12:20:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:46.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:46.570 12:20:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:46.570 12:20:58 -- common/autotest_common.sh@10 -- # set +x 00:26:46.570 [2024-06-11 12:20:58.592261] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
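For reference, the nvmf_tcp_init sequence traced above reduces to the commands below (interface names and addresses are the ones this rig reports: cvl_0_0 becomes the target port inside the namespace, cvl_0_1 stays in the root namespace as the initiator port):

    # condensed from the nvmf/common.sh trace above
    ip netns add cvl_0_0_ns_spdk                                  # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move one NIC port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic (port 4420) back in
    ping -c 1 10.0.0.2                                            # sanity-check both directions before starting the target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Putting the target port into its own namespace is what lets a single host act as both NVMe/TCP target and initiator over real hardware.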
00:26:46.570 [2024-06-11 12:20:58.592328] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:46.570 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.570 [2024-06-11 12:20:58.682582] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:46.570 [2024-06-11 12:20:58.728425] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:46.570 [2024-06-11 12:20:58.728569] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:46.570 [2024-06-11 12:20:58.728578] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:46.570 [2024-06-11 12:20:58.728586] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:46.570 [2024-06-11 12:20:58.728720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:46.570 [2024-06-11 12:20:58.728882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:46.570 [2024-06-11 12:20:58.729060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:46.570 [2024-06-11 12:20:58.729096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.570 12:20:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:46.570 12:20:59 -- common/autotest_common.sh@852 -- # return 0 00:26:46.570 12:20:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:46.570 12:20:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:46.570 12:20:59 -- common/autotest_common.sh@10 -- # set +x 00:26:46.570 12:20:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:46.570 12:20:59 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:46.570 12:20:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:46.570 12:20:59 -- common/autotest_common.sh@10 -- # set +x 00:26:46.570 [2024-06-11 12:20:59.414182] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:46.570 12:20:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:46.570 12:20:59 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:46.570 12:20:59 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:46.570 12:20:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:46.570 12:20:59 -- common/autotest_common.sh@10 -- # set +x 00:26:46.570 12:20:59 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:46.571 12:20:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:46.571 12:20:59 -- target/shutdown.sh@28 -- # cat 00:26:46.571 12:20:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:46.571 12:20:59 -- target/shutdown.sh@28 -- # cat 00:26:46.571 12:20:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:46.571 12:20:59 -- target/shutdown.sh@28 -- # cat 00:26:46.571 12:20:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:46.571 12:20:59 -- target/shutdown.sh@28 -- # cat 00:26:46.571 12:20:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:46.571 12:20:59 -- target/shutdown.sh@28 -- # cat 00:26:46.571 12:20:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:46.571 12:20:59 -- 
target/shutdown.sh@28 -- # cat 00:26:46.571 12:20:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:46.571 12:20:59 -- target/shutdown.sh@28 -- # cat 00:26:46.571 12:20:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:46.571 12:20:59 -- target/shutdown.sh@28 -- # cat 00:26:46.571 12:20:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:46.571 12:20:59 -- target/shutdown.sh@28 -- # cat 00:26:46.571 12:20:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:46.571 12:20:59 -- target/shutdown.sh@28 -- # cat 00:26:46.571 12:20:59 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:46.571 12:20:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:46.571 12:20:59 -- common/autotest_common.sh@10 -- # set +x 00:26:46.571 Malloc1 00:26:46.571 [2024-06-11 12:20:59.517630] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:46.571 Malloc2 00:26:46.571 Malloc3 00:26:46.858 Malloc4 00:26:46.858 Malloc5 00:26:46.858 Malloc6 00:26:46.858 Malloc7 00:26:46.858 Malloc8 00:26:46.858 Malloc9 00:26:46.858 Malloc10 00:26:47.144 12:20:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:47.144 12:20:59 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:47.144 12:20:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:47.144 12:20:59 -- common/autotest_common.sh@10 -- # set +x 00:26:47.144 12:20:59 -- target/shutdown.sh@78 -- # perfpid=1605203 00:26:47.144 12:20:59 -- target/shutdown.sh@79 -- # waitforlisten 1605203 /var/tmp/bdevperf.sock 00:26:47.144 12:20:59 -- common/autotest_common.sh@819 -- # '[' -z 1605203 ']' 00:26:47.144 12:20:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:47.144 12:20:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:47.144 12:20:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:47.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
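The ten "for i ... cat" iterations above only append text to rpcs.txt (the file removed by the rm -rf at shutdown.sh@26); the single rpc_cmd call at shutdown.sh@35 then replays the whole file against the target, which is what produces the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener. The exact heredoc is not shown in the trace, but each iteration plausibly contributes a batch of RPCs along these lines (bdev size, block size and serial number are illustrative, not taken from shutdown.sh):

    # hypothetical per-subsystem block appended to rpcs.txt for subsystem $i
    bdev_malloc_create 64 512 -b Malloc$i
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    # presumably replayed in one shot afterwards, e.g.:
    # rpc_cmd < "$testdir/rpcs.txt"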
00:26:47.144 12:20:59 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:47.144 12:20:59 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:47.144 12:20:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:47.144 12:20:59 -- common/autotest_common.sh@10 -- # set +x 00:26:47.144 12:20:59 -- nvmf/common.sh@520 -- # config=() 00:26:47.144 12:20:59 -- nvmf/common.sh@520 -- # local subsystem config 00:26:47.144 12:20:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:47.144 12:20:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:47.144 { 00:26:47.144 "params": { 00:26:47.144 "name": "Nvme$subsystem", 00:26:47.144 "trtype": "$TEST_TRANSPORT", 00:26:47.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:47.145 "adrfam": "ipv4", 00:26:47.145 "trsvcid": "$NVMF_PORT", 00:26:47.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:47.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:47.145 "hdgst": ${hdgst:-false}, 00:26:47.145 "ddgst": ${ddgst:-false} 00:26:47.145 }, 00:26:47.145 "method": "bdev_nvme_attach_controller" 00:26:47.145 } 00:26:47.145 EOF 00:26:47.145 )") 00:26:47.145 12:20:59 -- nvmf/common.sh@542 -- # cat 00:26:47.145 12:20:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:47.145 12:20:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:47.145 { 00:26:47.145 "params": { 00:26:47.145 "name": "Nvme$subsystem", 00:26:47.145 "trtype": "$TEST_TRANSPORT", 00:26:47.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:47.145 "adrfam": "ipv4", 00:26:47.145 "trsvcid": "$NVMF_PORT", 00:26:47.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:47.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:47.145 "hdgst": ${hdgst:-false}, 00:26:47.145 "ddgst": ${ddgst:-false} 00:26:47.145 }, 00:26:47.145 "method": "bdev_nvme_attach_controller" 00:26:47.145 } 00:26:47.145 EOF 00:26:47.145 )") 00:26:47.145 12:20:59 -- nvmf/common.sh@542 -- # cat 00:26:47.145 12:20:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:47.145 12:20:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:47.145 { 00:26:47.145 "params": { 00:26:47.145 "name": "Nvme$subsystem", 00:26:47.145 "trtype": "$TEST_TRANSPORT", 00:26:47.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:47.145 "adrfam": "ipv4", 00:26:47.145 "trsvcid": "$NVMF_PORT", 00:26:47.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:47.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:47.145 "hdgst": ${hdgst:-false}, 00:26:47.145 "ddgst": ${ddgst:-false} 00:26:47.145 }, 00:26:47.145 "method": "bdev_nvme_attach_controller" 00:26:47.145 } 00:26:47.145 EOF 00:26:47.145 )") 00:26:47.145 12:20:59 -- nvmf/common.sh@542 -- # cat 00:26:47.145 12:20:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:47.145 12:20:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:47.145 { 00:26:47.145 "params": { 00:26:47.145 "name": "Nvme$subsystem", 00:26:47.145 "trtype": "$TEST_TRANSPORT", 00:26:47.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:47.145 "adrfam": "ipv4", 00:26:47.145 "trsvcid": "$NVMF_PORT", 00:26:47.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:47.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:47.145 "hdgst": ${hdgst:-false}, 00:26:47.145 "ddgst": ${ddgst:-false} 00:26:47.145 }, 00:26:47.145 "method": "bdev_nvme_attach_controller" 00:26:47.145 } 00:26:47.145 EOF 00:26:47.145 )") 00:26:47.145 12:20:59 -- 
nvmf/common.sh@542 -- # cat 00:26:47.145 12:20:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:47.145 12:20:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:47.145 { 00:26:47.145 "params": { 00:26:47.145 "name": "Nvme$subsystem", 00:26:47.145 "trtype": "$TEST_TRANSPORT", 00:26:47.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:47.145 "adrfam": "ipv4", 00:26:47.145 "trsvcid": "$NVMF_PORT", 00:26:47.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:47.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:47.145 "hdgst": ${hdgst:-false}, 00:26:47.145 "ddgst": ${ddgst:-false} 00:26:47.145 }, 00:26:47.145 "method": "bdev_nvme_attach_controller" 00:26:47.145 } 00:26:47.145 EOF 00:26:47.145 )") 00:26:47.145 12:20:59 -- nvmf/common.sh@542 -- # cat 00:26:47.145 12:20:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:47.145 12:20:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:47.145 { 00:26:47.145 "params": { 00:26:47.145 "name": "Nvme$subsystem", 00:26:47.145 "trtype": "$TEST_TRANSPORT", 00:26:47.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:47.145 "adrfam": "ipv4", 00:26:47.145 "trsvcid": "$NVMF_PORT", 00:26:47.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:47.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:47.145 "hdgst": ${hdgst:-false}, 00:26:47.145 "ddgst": ${ddgst:-false} 00:26:47.145 }, 00:26:47.145 "method": "bdev_nvme_attach_controller" 00:26:47.145 } 00:26:47.145 EOF 00:26:47.145 )") 00:26:47.145 12:20:59 -- nvmf/common.sh@542 -- # cat 00:26:47.145 12:20:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:47.145 12:20:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:47.145 { 00:26:47.145 "params": { 00:26:47.145 "name": "Nvme$subsystem", 00:26:47.145 "trtype": "$TEST_TRANSPORT", 00:26:47.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:47.145 "adrfam": "ipv4", 00:26:47.145 "trsvcid": "$NVMF_PORT", 00:26:47.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:47.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:47.145 "hdgst": ${hdgst:-false}, 00:26:47.145 "ddgst": ${ddgst:-false} 00:26:47.145 }, 00:26:47.145 "method": "bdev_nvme_attach_controller" 00:26:47.145 } 00:26:47.145 EOF 00:26:47.145 )") 00:26:47.145 12:20:59 -- nvmf/common.sh@542 -- # cat 00:26:47.145 [2024-06-11 12:20:59.982110] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:47.145 [2024-06-11 12:20:59.982173] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:47.145 12:20:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:47.145 12:20:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:47.145 { 00:26:47.145 "params": { 00:26:47.145 "name": "Nvme$subsystem", 00:26:47.145 "trtype": "$TEST_TRANSPORT", 00:26:47.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:47.145 "adrfam": "ipv4", 00:26:47.145 "trsvcid": "$NVMF_PORT", 00:26:47.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:47.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:47.145 "hdgst": ${hdgst:-false}, 00:26:47.145 "ddgst": ${ddgst:-false} 00:26:47.145 }, 00:26:47.145 "method": "bdev_nvme_attach_controller" 00:26:47.145 } 00:26:47.145 EOF 00:26:47.145 )") 00:26:47.145 12:20:59 -- nvmf/common.sh@542 -- # cat 00:26:47.145 12:20:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:47.145 12:20:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:47.145 { 00:26:47.145 "params": { 00:26:47.145 "name": "Nvme$subsystem", 00:26:47.145 "trtype": "$TEST_TRANSPORT", 00:26:47.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:47.145 "adrfam": "ipv4", 00:26:47.145 "trsvcid": "$NVMF_PORT", 00:26:47.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:47.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:47.145 "hdgst": ${hdgst:-false}, 00:26:47.145 "ddgst": ${ddgst:-false} 00:26:47.145 }, 00:26:47.145 "method": "bdev_nvme_attach_controller" 00:26:47.145 } 00:26:47.145 EOF 00:26:47.145 )") 00:26:47.145 12:20:59 -- nvmf/common.sh@542 -- # cat 00:26:47.145 12:20:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:47.145 12:20:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:47.145 { 00:26:47.145 "params": { 00:26:47.145 "name": "Nvme$subsystem", 00:26:47.145 "trtype": "$TEST_TRANSPORT", 00:26:47.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:47.145 "adrfam": "ipv4", 00:26:47.145 "trsvcid": "$NVMF_PORT", 00:26:47.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:47.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:47.145 "hdgst": ${hdgst:-false}, 00:26:47.145 "ddgst": ${ddgst:-false} 00:26:47.145 }, 00:26:47.145 "method": "bdev_nvme_attach_controller" 00:26:47.145 } 00:26:47.145 EOF 00:26:47.145 )") 00:26:47.145 12:21:00 -- nvmf/common.sh@542 -- # cat 00:26:47.145 12:21:00 -- nvmf/common.sh@544 -- # jq . 
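The config+=("$(cat <<-EOF ... EOF)") fragments and the jq step traced above are gen_nvmf_target_json at work: one JSON object per subsystem is collected into an array, the fragments are joined with commas, and jq validates and pretty-prints the result, which the caller hands to the initiator app through process substitution (hence the --json /dev/fd/63 on the bdev_svc command line). A simplified sketch of that pattern, with the outer "subsystems"/"bdev" wrapper assumed from SPDK's JSON config format rather than taken from the trace:

    config=()
    for i in 1 2; do                      # the real call passes subsystems 1..10
        config+=("$(cat << EOF
    {
      "params": {
        "name": "Nvme$i", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
        "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode$i",
        "hostnqn": "nqn.2016-06.io.spdk:host$i", "hdgst": false, "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
EOF
        )")
    done
    # join the fragments with "," and wrap them in a bdev-subsystem config; jq checks the syntax
    jq . << JSON
    { "subsystems": [ { "subsystem": "bdev", "config": [ $(
        IFS=","
        printf '%s\n' "${config[*]}"
    ) ] } ] }
JSON

Feeding the generated document through <(gen_nvmf_target_json ...) keeps the per-test target list out of any on-disk config file.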
00:26:47.145 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.145 12:21:00 -- nvmf/common.sh@545 -- # IFS=, 00:26:47.145 12:21:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:47.145 "params": { 00:26:47.145 "name": "Nvme1", 00:26:47.145 "trtype": "tcp", 00:26:47.145 "traddr": "10.0.0.2", 00:26:47.145 "adrfam": "ipv4", 00:26:47.145 "trsvcid": "4420", 00:26:47.145 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:47.145 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:47.145 "hdgst": false, 00:26:47.145 "ddgst": false 00:26:47.145 }, 00:26:47.145 "method": "bdev_nvme_attach_controller" 00:26:47.145 },{ 00:26:47.145 "params": { 00:26:47.145 "name": "Nvme2", 00:26:47.145 "trtype": "tcp", 00:26:47.145 "traddr": "10.0.0.2", 00:26:47.145 "adrfam": "ipv4", 00:26:47.145 "trsvcid": "4420", 00:26:47.146 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:47.146 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:47.146 "hdgst": false, 00:26:47.146 "ddgst": false 00:26:47.146 }, 00:26:47.146 "method": "bdev_nvme_attach_controller" 00:26:47.146 },{ 00:26:47.146 "params": { 00:26:47.146 "name": "Nvme3", 00:26:47.146 "trtype": "tcp", 00:26:47.146 "traddr": "10.0.0.2", 00:26:47.146 "adrfam": "ipv4", 00:26:47.146 "trsvcid": "4420", 00:26:47.146 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:47.146 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:47.146 "hdgst": false, 00:26:47.146 "ddgst": false 00:26:47.146 }, 00:26:47.146 "method": "bdev_nvme_attach_controller" 00:26:47.146 },{ 00:26:47.146 "params": { 00:26:47.146 "name": "Nvme4", 00:26:47.146 "trtype": "tcp", 00:26:47.146 "traddr": "10.0.0.2", 00:26:47.146 "adrfam": "ipv4", 00:26:47.146 "trsvcid": "4420", 00:26:47.146 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:47.146 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:47.146 "hdgst": false, 00:26:47.146 "ddgst": false 00:26:47.146 }, 00:26:47.146 "method": "bdev_nvme_attach_controller" 00:26:47.146 },{ 00:26:47.146 "params": { 00:26:47.146 "name": "Nvme5", 00:26:47.146 "trtype": "tcp", 00:26:47.146 "traddr": "10.0.0.2", 00:26:47.146 "adrfam": "ipv4", 00:26:47.146 "trsvcid": "4420", 00:26:47.146 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:47.146 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:47.146 "hdgst": false, 00:26:47.146 "ddgst": false 00:26:47.146 }, 00:26:47.146 "method": "bdev_nvme_attach_controller" 00:26:47.146 },{ 00:26:47.146 "params": { 00:26:47.146 "name": "Nvme6", 00:26:47.146 "trtype": "tcp", 00:26:47.146 "traddr": "10.0.0.2", 00:26:47.146 "adrfam": "ipv4", 00:26:47.146 "trsvcid": "4420", 00:26:47.146 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:47.146 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:47.146 "hdgst": false, 00:26:47.146 "ddgst": false 00:26:47.146 }, 00:26:47.146 "method": "bdev_nvme_attach_controller" 00:26:47.146 },{ 00:26:47.146 "params": { 00:26:47.146 "name": "Nvme7", 00:26:47.146 "trtype": "tcp", 00:26:47.146 "traddr": "10.0.0.2", 00:26:47.146 "adrfam": "ipv4", 00:26:47.146 "trsvcid": "4420", 00:26:47.146 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:47.146 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:47.146 "hdgst": false, 00:26:47.146 "ddgst": false 00:26:47.146 }, 00:26:47.146 "method": "bdev_nvme_attach_controller" 00:26:47.146 },{ 00:26:47.146 "params": { 00:26:47.146 "name": "Nvme8", 00:26:47.146 "trtype": "tcp", 00:26:47.146 "traddr": "10.0.0.2", 00:26:47.146 "adrfam": "ipv4", 00:26:47.146 "trsvcid": "4420", 00:26:47.146 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:47.146 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:47.146 "hdgst": false, 00:26:47.146 "ddgst": false 
00:26:47.146 }, 00:26:47.146 "method": "bdev_nvme_attach_controller" 00:26:47.146 },{ 00:26:47.146 "params": { 00:26:47.146 "name": "Nvme9", 00:26:47.146 "trtype": "tcp", 00:26:47.146 "traddr": "10.0.0.2", 00:26:47.146 "adrfam": "ipv4", 00:26:47.146 "trsvcid": "4420", 00:26:47.146 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:47.146 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:47.146 "hdgst": false, 00:26:47.146 "ddgst": false 00:26:47.146 }, 00:26:47.146 "method": "bdev_nvme_attach_controller" 00:26:47.146 },{ 00:26:47.146 "params": { 00:26:47.146 "name": "Nvme10", 00:26:47.146 "trtype": "tcp", 00:26:47.146 "traddr": "10.0.0.2", 00:26:47.146 "adrfam": "ipv4", 00:26:47.146 "trsvcid": "4420", 00:26:47.146 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:47.146 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:47.146 "hdgst": false, 00:26:47.146 "ddgst": false 00:26:47.146 }, 00:26:47.146 "method": "bdev_nvme_attach_controller" 00:26:47.146 }' 00:26:47.146 [2024-06-11 12:21:00.045545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.146 [2024-06-11 12:21:00.076431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.531 12:21:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:48.531 12:21:01 -- common/autotest_common.sh@852 -- # return 0 00:26:48.531 12:21:01 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:48.531 12:21:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:48.531 12:21:01 -- common/autotest_common.sh@10 -- # set +x 00:26:48.531 12:21:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:48.531 12:21:01 -- target/shutdown.sh@83 -- # kill -9 1605203 00:26:48.531 12:21:01 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:26:48.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1605203 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:48.531 12:21:01 -- target/shutdown.sh@87 -- # sleep 1 00:26:49.472 12:21:02 -- target/shutdown.sh@88 -- # kill -0 1604815 00:26:49.472 12:21:02 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:49.472 12:21:02 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:49.472 12:21:02 -- nvmf/common.sh@520 -- # config=() 00:26:49.472 12:21:02 -- nvmf/common.sh@520 -- # local subsystem config 00:26:49.472 12:21:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:49.472 12:21:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:49.472 { 00:26:49.472 "params": { 00:26:49.472 "name": "Nvme$subsystem", 00:26:49.472 "trtype": "$TEST_TRANSPORT", 00:26:49.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.472 "adrfam": "ipv4", 00:26:49.472 "trsvcid": "$NVMF_PORT", 00:26:49.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.472 "hdgst": ${hdgst:-false}, 00:26:49.472 "ddgst": ${ddgst:-false} 00:26:49.472 }, 00:26:49.472 "method": "bdev_nvme_attach_controller" 00:26:49.472 } 00:26:49.472 EOF 00:26:49.472 )") 00:26:49.472 12:21:02 -- nvmf/common.sh@542 -- # cat 00:26:49.472 12:21:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:49.472 12:21:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:49.472 { 00:26:49.472 "params": { 00:26:49.472 "name": "Nvme$subsystem", 
00:26:49.472 "trtype": "$TEST_TRANSPORT", 00:26:49.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.472 "adrfam": "ipv4", 00:26:49.472 "trsvcid": "$NVMF_PORT", 00:26:49.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.472 "hdgst": ${hdgst:-false}, 00:26:49.472 "ddgst": ${ddgst:-false} 00:26:49.472 }, 00:26:49.472 "method": "bdev_nvme_attach_controller" 00:26:49.472 } 00:26:49.472 EOF 00:26:49.472 )") 00:26:49.472 12:21:02 -- nvmf/common.sh@542 -- # cat 00:26:49.472 12:21:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:49.472 12:21:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:49.472 { 00:26:49.472 "params": { 00:26:49.472 "name": "Nvme$subsystem", 00:26:49.472 "trtype": "$TEST_TRANSPORT", 00:26:49.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.472 "adrfam": "ipv4", 00:26:49.472 "trsvcid": "$NVMF_PORT", 00:26:49.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.472 "hdgst": ${hdgst:-false}, 00:26:49.472 "ddgst": ${ddgst:-false} 00:26:49.472 }, 00:26:49.472 "method": "bdev_nvme_attach_controller" 00:26:49.472 } 00:26:49.472 EOF 00:26:49.472 )") 00:26:49.472 12:21:02 -- nvmf/common.sh@542 -- # cat 00:26:49.472 12:21:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:49.472 12:21:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:49.472 { 00:26:49.472 "params": { 00:26:49.472 "name": "Nvme$subsystem", 00:26:49.472 "trtype": "$TEST_TRANSPORT", 00:26:49.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.472 "adrfam": "ipv4", 00:26:49.472 "trsvcid": "$NVMF_PORT", 00:26:49.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.472 "hdgst": ${hdgst:-false}, 00:26:49.472 "ddgst": ${ddgst:-false} 00:26:49.472 }, 00:26:49.472 "method": "bdev_nvme_attach_controller" 00:26:49.472 } 00:26:49.472 EOF 00:26:49.472 )") 00:26:49.472 12:21:02 -- nvmf/common.sh@542 -- # cat 00:26:49.472 12:21:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:49.472 12:21:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:49.472 { 00:26:49.472 "params": { 00:26:49.472 "name": "Nvme$subsystem", 00:26:49.472 "trtype": "$TEST_TRANSPORT", 00:26:49.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.472 "adrfam": "ipv4", 00:26:49.472 "trsvcid": "$NVMF_PORT", 00:26:49.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.472 "hdgst": ${hdgst:-false}, 00:26:49.472 "ddgst": ${ddgst:-false} 00:26:49.472 }, 00:26:49.472 "method": "bdev_nvme_attach_controller" 00:26:49.472 } 00:26:49.472 EOF 00:26:49.472 )") 00:26:49.472 12:21:02 -- nvmf/common.sh@542 -- # cat 00:26:49.473 12:21:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:49.473 12:21:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:49.473 { 00:26:49.473 "params": { 00:26:49.473 "name": "Nvme$subsystem", 00:26:49.473 "trtype": "$TEST_TRANSPORT", 00:26:49.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.473 "adrfam": "ipv4", 00:26:49.473 "trsvcid": "$NVMF_PORT", 00:26:49.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.473 "hdgst": ${hdgst:-false}, 00:26:49.473 "ddgst": ${ddgst:-false} 00:26:49.473 }, 00:26:49.473 "method": "bdev_nvme_attach_controller" 00:26:49.473 } 00:26:49.473 EOF 00:26:49.473 )") 00:26:49.473 12:21:02 -- nvmf/common.sh@542 
-- # cat 00:26:49.473 12:21:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:49.473 12:21:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:49.473 { 00:26:49.473 "params": { 00:26:49.473 "name": "Nvme$subsystem", 00:26:49.473 "trtype": "$TEST_TRANSPORT", 00:26:49.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.473 "adrfam": "ipv4", 00:26:49.473 "trsvcid": "$NVMF_PORT", 00:26:49.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.473 "hdgst": ${hdgst:-false}, 00:26:49.473 "ddgst": ${ddgst:-false} 00:26:49.473 }, 00:26:49.473 "method": "bdev_nvme_attach_controller" 00:26:49.473 } 00:26:49.473 EOF 00:26:49.473 )") 00:26:49.473 12:21:02 -- nvmf/common.sh@542 -- # cat 00:26:49.473 [2024-06-11 12:21:02.459162] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:49.473 [2024-06-11 12:21:02.459216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605823 ] 00:26:49.473 12:21:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:49.473 12:21:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:49.473 { 00:26:49.473 "params": { 00:26:49.473 "name": "Nvme$subsystem", 00:26:49.473 "trtype": "$TEST_TRANSPORT", 00:26:49.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.473 "adrfam": "ipv4", 00:26:49.473 "trsvcid": "$NVMF_PORT", 00:26:49.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.473 "hdgst": ${hdgst:-false}, 00:26:49.473 "ddgst": ${ddgst:-false} 00:26:49.473 }, 00:26:49.473 "method": "bdev_nvme_attach_controller" 00:26:49.473 } 00:26:49.473 EOF 00:26:49.473 )") 00:26:49.473 12:21:02 -- nvmf/common.sh@542 -- # cat 00:26:49.473 12:21:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:49.473 12:21:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:49.473 { 00:26:49.473 "params": { 00:26:49.473 "name": "Nvme$subsystem", 00:26:49.473 "trtype": "$TEST_TRANSPORT", 00:26:49.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.473 "adrfam": "ipv4", 00:26:49.473 "trsvcid": "$NVMF_PORT", 00:26:49.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.473 "hdgst": ${hdgst:-false}, 00:26:49.473 "ddgst": ${ddgst:-false} 00:26:49.473 }, 00:26:49.473 "method": "bdev_nvme_attach_controller" 00:26:49.473 } 00:26:49.473 EOF 00:26:49.473 )") 00:26:49.473 12:21:02 -- nvmf/common.sh@542 -- # cat 00:26:49.473 12:21:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:49.473 12:21:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:49.473 { 00:26:49.473 "params": { 00:26:49.473 "name": "Nvme$subsystem", 00:26:49.473 "trtype": "$TEST_TRANSPORT", 00:26:49.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.473 "adrfam": "ipv4", 00:26:49.473 "trsvcid": "$NVMF_PORT", 00:26:49.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.473 "hdgst": ${hdgst:-false}, 00:26:49.473 "ddgst": ${ddgst:-false} 00:26:49.473 }, 00:26:49.473 "method": "bdev_nvme_attach_controller" 00:26:49.473 } 00:26:49.473 EOF 00:26:49.473 )") 00:26:49.473 12:21:02 -- nvmf/common.sh@542 -- # cat 00:26:49.473 12:21:02 -- nvmf/common.sh@544 -- # jq . 
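Everything from shutdown.sh@77 through @91 in the trace above is the substance of test case 1: attach an initiator to all ten subsystems, kill it without warning, and prove the target keeps serving. Stripped of the xtrace noise it is roughly this sequence ($rootdir is the SPDK checkout; paths, flags and PIDs as logged in this run):

    # test case 1, condensed from the trace (shutdown.sh@77-@91)
    "$rootdir/test/app/bdev_svc/bdev_svc" -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json "${num_subsystems[@]}") &
    perfpid=$!                                              # 1605203 in this run
    waitforlisten "$perfpid" /var/tmp/bdevperf.sock
    rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init   # all ten controllers attached
    kill -9 "$perfpid"                                      # drop the initiator mid-connection
    rm -f /var/run/spdk_bdev1
    sleep 1
    kill -0 "$nvmfpid"                                      # the target (1604815) must still be alive
    # finally run a short I/O pass against the same ten subsystems
    "$rootdir/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) -q 64 -o 65536 -w verify -t 1

The latency table that follows is the output of that final bdevperf verify pass.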
00:26:49.473 EAL: No free 2048 kB hugepages reported on node 1 00:26:49.473 12:21:02 -- nvmf/common.sh@545 -- # IFS=, 00:26:49.473 12:21:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:49.473 "params": { 00:26:49.473 "name": "Nvme1", 00:26:49.473 "trtype": "tcp", 00:26:49.473 "traddr": "10.0.0.2", 00:26:49.473 "adrfam": "ipv4", 00:26:49.473 "trsvcid": "4420", 00:26:49.473 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:49.473 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:49.473 "hdgst": false, 00:26:49.473 "ddgst": false 00:26:49.473 }, 00:26:49.473 "method": "bdev_nvme_attach_controller" 00:26:49.473 },{ 00:26:49.473 "params": { 00:26:49.473 "name": "Nvme2", 00:26:49.473 "trtype": "tcp", 00:26:49.473 "traddr": "10.0.0.2", 00:26:49.473 "adrfam": "ipv4", 00:26:49.473 "trsvcid": "4420", 00:26:49.473 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:49.473 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:49.473 "hdgst": false, 00:26:49.473 "ddgst": false 00:26:49.473 }, 00:26:49.473 "method": "bdev_nvme_attach_controller" 00:26:49.473 },{ 00:26:49.473 "params": { 00:26:49.473 "name": "Nvme3", 00:26:49.473 "trtype": "tcp", 00:26:49.473 "traddr": "10.0.0.2", 00:26:49.473 "adrfam": "ipv4", 00:26:49.473 "trsvcid": "4420", 00:26:49.473 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:49.473 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:49.473 "hdgst": false, 00:26:49.473 "ddgst": false 00:26:49.473 }, 00:26:49.473 "method": "bdev_nvme_attach_controller" 00:26:49.473 },{ 00:26:49.473 "params": { 00:26:49.473 "name": "Nvme4", 00:26:49.473 "trtype": "tcp", 00:26:49.473 "traddr": "10.0.0.2", 00:26:49.473 "adrfam": "ipv4", 00:26:49.473 "trsvcid": "4420", 00:26:49.473 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:49.473 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:49.473 "hdgst": false, 00:26:49.473 "ddgst": false 00:26:49.473 }, 00:26:49.473 "method": "bdev_nvme_attach_controller" 00:26:49.473 },{ 00:26:49.473 "params": { 00:26:49.473 "name": "Nvme5", 00:26:49.473 "trtype": "tcp", 00:26:49.473 "traddr": "10.0.0.2", 00:26:49.473 "adrfam": "ipv4", 00:26:49.473 "trsvcid": "4420", 00:26:49.473 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:49.473 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:49.473 "hdgst": false, 00:26:49.473 "ddgst": false 00:26:49.473 }, 00:26:49.473 "method": "bdev_nvme_attach_controller" 00:26:49.473 },{ 00:26:49.473 "params": { 00:26:49.473 "name": "Nvme6", 00:26:49.473 "trtype": "tcp", 00:26:49.473 "traddr": "10.0.0.2", 00:26:49.473 "adrfam": "ipv4", 00:26:49.473 "trsvcid": "4420", 00:26:49.473 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:49.473 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:49.473 "hdgst": false, 00:26:49.473 "ddgst": false 00:26:49.473 }, 00:26:49.473 "method": "bdev_nvme_attach_controller" 00:26:49.473 },{ 00:26:49.473 "params": { 00:26:49.473 "name": "Nvme7", 00:26:49.473 "trtype": "tcp", 00:26:49.473 "traddr": "10.0.0.2", 00:26:49.473 "adrfam": "ipv4", 00:26:49.473 "trsvcid": "4420", 00:26:49.473 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:49.473 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:49.473 "hdgst": false, 00:26:49.473 "ddgst": false 00:26:49.473 }, 00:26:49.473 "method": "bdev_nvme_attach_controller" 00:26:49.473 },{ 00:26:49.473 "params": { 00:26:49.473 "name": "Nvme8", 00:26:49.473 "trtype": "tcp", 00:26:49.473 "traddr": "10.0.0.2", 00:26:49.473 "adrfam": "ipv4", 00:26:49.473 "trsvcid": "4420", 00:26:49.473 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:49.473 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:49.473 "hdgst": false, 00:26:49.473 "ddgst": false 
00:26:49.473 }, 00:26:49.473 "method": "bdev_nvme_attach_controller" 00:26:49.473 },{ 00:26:49.473 "params": { 00:26:49.473 "name": "Nvme9", 00:26:49.473 "trtype": "tcp", 00:26:49.473 "traddr": "10.0.0.2", 00:26:49.473 "adrfam": "ipv4", 00:26:49.473 "trsvcid": "4420", 00:26:49.473 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:49.473 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:49.473 "hdgst": false, 00:26:49.473 "ddgst": false 00:26:49.473 }, 00:26:49.473 "method": "bdev_nvme_attach_controller" 00:26:49.473 },{ 00:26:49.473 "params": { 00:26:49.473 "name": "Nvme10", 00:26:49.473 "trtype": "tcp", 00:26:49.473 "traddr": "10.0.0.2", 00:26:49.473 "adrfam": "ipv4", 00:26:49.473 "trsvcid": "4420", 00:26:49.473 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:49.473 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:49.473 "hdgst": false, 00:26:49.473 "ddgst": false 00:26:49.473 }, 00:26:49.473 "method": "bdev_nvme_attach_controller" 00:26:49.473 }' 00:26:49.734 [2024-06-11 12:21:02.521353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.734 [2024-06-11 12:21:02.550209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.118 Running I/O for 1 seconds... 00:26:52.502 00:26:52.502 Latency(us) 00:26:52.502 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.502 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:52.502 Verification LBA range: start 0x0 length 0x400 00:26:52.502 Nvme1n1 : 1.09 402.50 25.16 0.00 0.00 156546.31 10922.67 168645.97 00:26:52.503 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:52.503 Verification LBA range: start 0x0 length 0x400 00:26:52.503 Nvme2n1 : 1.10 438.58 27.41 0.00 0.00 142883.29 15400.96 149422.08 00:26:52.503 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:52.503 Verification LBA range: start 0x0 length 0x400 00:26:52.503 Nvme3n1 : 1.08 403.62 25.23 0.00 0.00 152256.69 28835.84 142431.57 00:26:52.503 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:52.503 Verification LBA range: start 0x0 length 0x400 00:26:52.503 Nvme4n1 : 1.10 438.72 27.42 0.00 0.00 140972.92 10977.28 138936.32 00:26:52.503 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:52.503 Verification LBA range: start 0x0 length 0x400 00:26:52.503 Nvme5n1 : 1.08 402.45 25.15 0.00 0.00 150367.62 28835.84 135441.07 00:26:52.503 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:52.503 Verification LBA range: start 0x0 length 0x400 00:26:52.503 Nvme6n1 : 1.11 436.80 27.30 0.00 0.00 139294.47 14308.69 131072.00 00:26:52.503 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:52.503 Verification LBA range: start 0x0 length 0x400 00:26:52.503 Nvme7n1 : 1.11 436.74 27.30 0.00 0.00 138287.18 15619.41 131072.00 00:26:52.503 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:52.503 Verification LBA range: start 0x0 length 0x400 00:26:52.503 Nvme8n1 : 1.10 437.42 27.34 0.00 0.00 136880.57 15400.96 117964.80 00:26:52.503 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:52.503 Verification LBA range: start 0x0 length 0x400 00:26:52.503 Nvme9n1 : 1.11 434.37 27.15 0.00 0.00 137096.09 13981.01 131945.81 00:26:52.503 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:52.503 Verification LBA range: start 0x0 length 0x400 00:26:52.503 Nvme10n1 : 1.15 419.60 
26.23 0.00 0.00 136000.66 14964.05 118838.61 00:26:52.503 =================================================================================================================== 00:26:52.503 Total : 4250.80 265.67 0.00 0.00 142755.39 10922.67 168645.97 00:26:52.503 12:21:05 -- target/shutdown.sh@93 -- # stoptarget 00:26:52.503 12:21:05 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:52.503 12:21:05 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:52.503 12:21:05 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:52.503 12:21:05 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:52.503 12:21:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:52.503 12:21:05 -- nvmf/common.sh@116 -- # sync 00:26:52.503 12:21:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:52.503 12:21:05 -- nvmf/common.sh@119 -- # set +e 00:26:52.503 12:21:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:52.503 12:21:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:52.503 rmmod nvme_tcp 00:26:52.503 rmmod nvme_fabrics 00:26:52.503 rmmod nvme_keyring 00:26:52.503 12:21:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:52.503 12:21:05 -- nvmf/common.sh@123 -- # set -e 00:26:52.503 12:21:05 -- nvmf/common.sh@124 -- # return 0 00:26:52.503 12:21:05 -- nvmf/common.sh@477 -- # '[' -n 1604815 ']' 00:26:52.503 12:21:05 -- nvmf/common.sh@478 -- # killprocess 1604815 00:26:52.503 12:21:05 -- common/autotest_common.sh@926 -- # '[' -z 1604815 ']' 00:26:52.503 12:21:05 -- common/autotest_common.sh@930 -- # kill -0 1604815 00:26:52.503 12:21:05 -- common/autotest_common.sh@931 -- # uname 00:26:52.503 12:21:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:52.503 12:21:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1604815 00:26:52.503 12:21:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:52.503 12:21:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:52.503 12:21:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1604815' 00:26:52.503 killing process with pid 1604815 00:26:52.503 12:21:05 -- common/autotest_common.sh@945 -- # kill 1604815 00:26:52.503 12:21:05 -- common/autotest_common.sh@950 -- # wait 1604815 00:26:52.763 12:21:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:52.763 12:21:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:52.763 12:21:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:52.763 12:21:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:52.763 12:21:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:52.763 12:21:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.763 12:21:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:52.763 12:21:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.311 12:21:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:55.311 00:26:55.311 real 0m16.657s 00:26:55.311 user 0m34.320s 00:26:55.311 sys 0m6.593s 00:26:55.311 12:21:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:55.311 12:21:07 -- common/autotest_common.sh@10 -- # set +x 00:26:55.311 ************************************ 00:26:55.311 END TEST nvmf_shutdown_tc1 00:26:55.311 ************************************ 00:26:55.311 12:21:07 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 
nvmf_shutdown_tc2 00:26:55.311 12:21:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:55.311 12:21:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:55.311 12:21:07 -- common/autotest_common.sh@10 -- # set +x 00:26:55.311 ************************************ 00:26:55.311 START TEST nvmf_shutdown_tc2 00:26:55.311 ************************************ 00:26:55.311 12:21:07 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:26:55.311 12:21:07 -- target/shutdown.sh@98 -- # starttarget 00:26:55.311 12:21:07 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:55.311 12:21:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:55.311 12:21:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:55.311 12:21:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:55.311 12:21:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:55.311 12:21:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:55.311 12:21:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.311 12:21:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:55.311 12:21:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.311 12:21:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:55.311 12:21:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:55.311 12:21:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:55.311 12:21:07 -- common/autotest_common.sh@10 -- # set +x 00:26:55.311 12:21:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:55.311 12:21:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:55.312 12:21:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:55.312 12:21:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:55.312 12:21:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:55.312 12:21:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:55.312 12:21:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:55.312 12:21:07 -- nvmf/common.sh@294 -- # net_devs=() 00:26:55.312 12:21:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:55.312 12:21:07 -- nvmf/common.sh@295 -- # e810=() 00:26:55.312 12:21:07 -- nvmf/common.sh@295 -- # local -ga e810 00:26:55.312 12:21:07 -- nvmf/common.sh@296 -- # x722=() 00:26:55.312 12:21:07 -- nvmf/common.sh@296 -- # local -ga x722 00:26:55.312 12:21:07 -- nvmf/common.sh@297 -- # mlx=() 00:26:55.312 12:21:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:55.312 12:21:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:55.312 12:21:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:55.312 12:21:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:55.312 12:21:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:55.312 12:21:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:55.312 12:21:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:55.312 12:21:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:55.312 12:21:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:55.312 12:21:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:55.312 12:21:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:55.312 12:21:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:55.312 12:21:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:55.312 
12:21:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:55.312 12:21:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:55.312 12:21:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:55.312 12:21:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:55.312 12:21:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:55.312 12:21:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:55.312 12:21:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:55.312 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:55.312 12:21:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:55.312 12:21:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:55.312 12:21:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.312 12:21:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.312 12:21:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:55.312 12:21:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:55.312 12:21:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:55.312 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:55.312 12:21:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:55.312 12:21:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:55.312 12:21:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.312 12:21:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.312 12:21:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:55.312 12:21:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:55.312 12:21:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:55.312 12:21:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:55.312 12:21:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:55.312 12:21:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.312 12:21:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:55.312 12:21:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.312 12:21:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:55.312 Found net devices under 0000:31:00.0: cvl_0_0 00:26:55.312 12:21:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.312 12:21:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:55.312 12:21:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.312 12:21:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:55.312 12:21:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.312 12:21:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:55.312 Found net devices under 0000:31:00.1: cvl_0_1 00:26:55.312 12:21:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.312 12:21:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:55.312 12:21:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:55.312 12:21:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:55.312 12:21:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:55.312 12:21:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:55.312 12:21:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:55.312 12:21:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:55.312 12:21:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:55.312 12:21:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:55.312 12:21:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:55.312 12:21:07 -- 
nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:55.312 12:21:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:55.312 12:21:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:55.312 12:21:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:55.312 12:21:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:55.312 12:21:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:55.312 12:21:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:55.312 12:21:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:55.312 12:21:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:55.312 12:21:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:55.312 12:21:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:55.312 12:21:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:55.312 12:21:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:55.312 12:21:08 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:55.312 12:21:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:55.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:55.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.712 ms 00:26:55.312 00:26:55.312 --- 10.0.0.2 ping statistics --- 00:26:55.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.312 rtt min/avg/max/mdev = 0.712/0.712/0.712/0.000 ms 00:26:55.312 12:21:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:55.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:55.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:26:55.312 00:26:55.312 --- 10.0.0.1 ping statistics --- 00:26:55.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.312 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:26:55.312 12:21:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:55.312 12:21:08 -- nvmf/common.sh@410 -- # return 0 00:26:55.312 12:21:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:55.312 12:21:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:55.312 12:21:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:55.312 12:21:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:55.312 12:21:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:55.312 12:21:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:55.312 12:21:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:55.312 12:21:08 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:55.312 12:21:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:55.312 12:21:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:55.312 12:21:08 -- common/autotest_common.sh@10 -- # set +x 00:26:55.312 12:21:08 -- nvmf/common.sh@469 -- # nvmfpid=1607135 00:26:55.312 12:21:08 -- nvmf/common.sh@470 -- # waitforlisten 1607135 00:26:55.312 12:21:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:55.312 12:21:08 -- common/autotest_common.sh@819 -- # '[' -z 1607135 ']' 00:26:55.312 12:21:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.312 12:21:08 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:26:55.312 12:21:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:55.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.312 12:21:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:55.312 12:21:08 -- common/autotest_common.sh@10 -- # set +x 00:26:55.312 [2024-06-11 12:21:08.320060] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:55.312 [2024-06-11 12:21:08.320125] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:55.606 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.606 [2024-06-11 12:21:08.409514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:55.606 [2024-06-11 12:21:08.441012] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:55.606 [2024-06-11 12:21:08.441140] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:55.606 [2024-06-11 12:21:08.441148] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:55.606 [2024-06-11 12:21:08.441154] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:55.606 [2024-06-11 12:21:08.441287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:55.606 [2024-06-11 12:21:08.441446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:55.606 [2024-06-11 12:21:08.441604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.606 [2024-06-11 12:21:08.441607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:56.177 12:21:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:56.177 12:21:09 -- common/autotest_common.sh@852 -- # return 0 00:26:56.177 12:21:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:56.177 12:21:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:56.177 12:21:09 -- common/autotest_common.sh@10 -- # set +x 00:26:56.177 12:21:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:56.177 12:21:09 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:56.177 12:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:56.177 12:21:09 -- common/autotest_common.sh@10 -- # set +x 00:26:56.177 [2024-06-11 12:21:09.135101] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.177 12:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:56.177 12:21:09 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:56.177 12:21:09 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:56.177 12:21:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:56.177 12:21:09 -- common/autotest_common.sh@10 -- # set +x 00:26:56.177 12:21:09 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:56.177 12:21:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:56.177 12:21:09 -- target/shutdown.sh@28 -- # cat 00:26:56.177 12:21:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:56.177 
12:21:09 -- target/shutdown.sh@28 -- # cat 00:26:56.177 12:21:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:56.177 12:21:09 -- target/shutdown.sh@28 -- # cat 00:26:56.177 12:21:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:56.177 12:21:09 -- target/shutdown.sh@28 -- # cat 00:26:56.177 12:21:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:56.177 12:21:09 -- target/shutdown.sh@28 -- # cat 00:26:56.177 12:21:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:56.177 12:21:09 -- target/shutdown.sh@28 -- # cat 00:26:56.177 12:21:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:56.177 12:21:09 -- target/shutdown.sh@28 -- # cat 00:26:56.177 12:21:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:56.177 12:21:09 -- target/shutdown.sh@28 -- # cat 00:26:56.177 12:21:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:56.177 12:21:09 -- target/shutdown.sh@28 -- # cat 00:26:56.177 12:21:09 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:56.177 12:21:09 -- target/shutdown.sh@28 -- # cat 00:26:56.177 12:21:09 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:56.177 12:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:56.177 12:21:09 -- common/autotest_common.sh@10 -- # set +x 00:26:56.438 Malloc1 00:26:56.438 [2024-06-11 12:21:09.233968] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:56.438 Malloc2 00:26:56.438 Malloc3 00:26:56.438 Malloc4 00:26:56.438 Malloc5 00:26:56.438 Malloc6 00:26:56.438 Malloc7 00:26:56.698 Malloc8 00:26:56.699 Malloc9 00:26:56.699 Malloc10 00:26:56.699 12:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:56.699 12:21:09 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:56.699 12:21:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:56.699 12:21:09 -- common/autotest_common.sh@10 -- # set +x 00:26:56.699 12:21:09 -- target/shutdown.sh@102 -- # perfpid=1607727 00:26:56.699 12:21:09 -- target/shutdown.sh@103 -- # waitforlisten 1607727 /var/tmp/bdevperf.sock 00:26:56.699 12:21:09 -- common/autotest_common.sh@819 -- # '[' -z 1607727 ']' 00:26:56.699 12:21:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:56.699 12:21:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:56.699 12:21:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:56.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
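Both test cases block on the same 'Waiting for process to start up and listen on UNIX domain socket ...' message, which comes from the waitforlisten helper in autotest_common.sh. Its internals are not visible in this trace; simplified down to a bare socket check (the real helper also probes the RPC server), it behaves something like:

    # rough sketch only; the function name, retry count and interval are assumptions
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 100; i > 0; i--)); do
            kill -0 "$pid" 2> /dev/null || return 1   # give up if the process died
            [[ -S $rpc_addr ]] && break               # socket showed up, assume it is ready
            sleep 0.1
        done
        (( i == 0 )) && return 1                      # retries exhausted
        return 0
    }

The (( i == 0 )) / return 0 pair that shows up in the trace right after each wait is the tail end of exactly this kind of loop.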
00:26:56.699 12:21:09 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:56.699 12:21:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:56.699 12:21:09 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:56.699 12:21:09 -- common/autotest_common.sh@10 -- # set +x 00:26:56.699 12:21:09 -- nvmf/common.sh@520 -- # config=() 00:26:56.699 12:21:09 -- nvmf/common.sh@520 -- # local subsystem config 00:26:56.699 12:21:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:56.699 12:21:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:56.699 { 00:26:56.699 "params": { 00:26:56.699 "name": "Nvme$subsystem", 00:26:56.699 "trtype": "$TEST_TRANSPORT", 00:26:56.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.699 "adrfam": "ipv4", 00:26:56.699 "trsvcid": "$NVMF_PORT", 00:26:56.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.699 "hdgst": ${hdgst:-false}, 00:26:56.699 "ddgst": ${ddgst:-false} 00:26:56.699 }, 00:26:56.699 "method": "bdev_nvme_attach_controller" 00:26:56.699 } 00:26:56.699 EOF 00:26:56.699 )") 00:26:56.699 12:21:09 -- nvmf/common.sh@542 -- # cat 00:26:56.699 12:21:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:56.699 12:21:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:56.699 { 00:26:56.699 "params": { 00:26:56.699 "name": "Nvme$subsystem", 00:26:56.699 "trtype": "$TEST_TRANSPORT", 00:26:56.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.699 "adrfam": "ipv4", 00:26:56.699 "trsvcid": "$NVMF_PORT", 00:26:56.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.699 "hdgst": ${hdgst:-false}, 00:26:56.699 "ddgst": ${ddgst:-false} 00:26:56.699 }, 00:26:56.699 "method": "bdev_nvme_attach_controller" 00:26:56.699 } 00:26:56.699 EOF 00:26:56.699 )") 00:26:56.699 12:21:09 -- nvmf/common.sh@542 -- # cat 00:26:56.699 12:21:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:56.699 12:21:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:56.699 { 00:26:56.699 "params": { 00:26:56.699 "name": "Nvme$subsystem", 00:26:56.699 "trtype": "$TEST_TRANSPORT", 00:26:56.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.699 "adrfam": "ipv4", 00:26:56.699 "trsvcid": "$NVMF_PORT", 00:26:56.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.699 "hdgst": ${hdgst:-false}, 00:26:56.699 "ddgst": ${ddgst:-false} 00:26:56.699 }, 00:26:56.699 "method": "bdev_nvme_attach_controller" 00:26:56.699 } 00:26:56.699 EOF 00:26:56.699 )") 00:26:56.699 12:21:09 -- nvmf/common.sh@542 -- # cat 00:26:56.699 12:21:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:56.699 12:21:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:56.699 { 00:26:56.699 "params": { 00:26:56.699 "name": "Nvme$subsystem", 00:26:56.699 "trtype": "$TEST_TRANSPORT", 00:26:56.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.699 "adrfam": "ipv4", 00:26:56.699 "trsvcid": "$NVMF_PORT", 00:26:56.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.699 "hdgst": ${hdgst:-false}, 00:26:56.699 "ddgst": ${ddgst:-false} 00:26:56.699 }, 00:26:56.699 "method": "bdev_nvme_attach_controller" 00:26:56.699 } 00:26:56.699 EOF 00:26:56.699 )") 
00:26:56.699 12:21:09 -- nvmf/common.sh@542 -- # cat 00:26:56.699 12:21:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:56.699 12:21:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:56.699 { 00:26:56.699 "params": { 00:26:56.699 "name": "Nvme$subsystem", 00:26:56.699 "trtype": "$TEST_TRANSPORT", 00:26:56.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.699 "adrfam": "ipv4", 00:26:56.699 "trsvcid": "$NVMF_PORT", 00:26:56.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.699 "hdgst": ${hdgst:-false}, 00:26:56.699 "ddgst": ${ddgst:-false} 00:26:56.699 }, 00:26:56.699 "method": "bdev_nvme_attach_controller" 00:26:56.699 } 00:26:56.699 EOF 00:26:56.699 )") 00:26:56.699 12:21:09 -- nvmf/common.sh@542 -- # cat 00:26:56.699 12:21:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:56.699 12:21:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:56.699 { 00:26:56.699 "params": { 00:26:56.699 "name": "Nvme$subsystem", 00:26:56.699 "trtype": "$TEST_TRANSPORT", 00:26:56.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.699 "adrfam": "ipv4", 00:26:56.699 "trsvcid": "$NVMF_PORT", 00:26:56.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.699 "hdgst": ${hdgst:-false}, 00:26:56.699 "ddgst": ${ddgst:-false} 00:26:56.699 }, 00:26:56.699 "method": "bdev_nvme_attach_controller" 00:26:56.699 } 00:26:56.699 EOF 00:26:56.699 )") 00:26:56.699 12:21:09 -- nvmf/common.sh@542 -- # cat 00:26:56.699 [2024-06-11 12:21:09.688088] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:56.699 [2024-06-11 12:21:09.688142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1607727 ] 00:26:56.699 12:21:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:56.699 12:21:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:56.699 { 00:26:56.699 "params": { 00:26:56.699 "name": "Nvme$subsystem", 00:26:56.699 "trtype": "$TEST_TRANSPORT", 00:26:56.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.699 "adrfam": "ipv4", 00:26:56.699 "trsvcid": "$NVMF_PORT", 00:26:56.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.699 "hdgst": ${hdgst:-false}, 00:26:56.699 "ddgst": ${ddgst:-false} 00:26:56.699 }, 00:26:56.699 "method": "bdev_nvme_attach_controller" 00:26:56.699 } 00:26:56.699 EOF 00:26:56.699 )") 00:26:56.699 12:21:09 -- nvmf/common.sh@542 -- # cat 00:26:56.699 12:21:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:56.699 12:21:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:56.699 { 00:26:56.699 "params": { 00:26:56.699 "name": "Nvme$subsystem", 00:26:56.699 "trtype": "$TEST_TRANSPORT", 00:26:56.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.699 "adrfam": "ipv4", 00:26:56.699 "trsvcid": "$NVMF_PORT", 00:26:56.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.699 "hdgst": ${hdgst:-false}, 00:26:56.699 "ddgst": ${ddgst:-false} 00:26:56.699 }, 00:26:56.699 "method": "bdev_nvme_attach_controller" 00:26:56.699 } 00:26:56.699 EOF 00:26:56.699 )") 00:26:56.699 12:21:09 -- nvmf/common.sh@542 -- # cat 00:26:56.699 12:21:09 -- nvmf/common.sh@522 -- # for subsystem in 
"${@:-1}" 00:26:56.699 12:21:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:56.699 { 00:26:56.699 "params": { 00:26:56.699 "name": "Nvme$subsystem", 00:26:56.699 "trtype": "$TEST_TRANSPORT", 00:26:56.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.699 "adrfam": "ipv4", 00:26:56.699 "trsvcid": "$NVMF_PORT", 00:26:56.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.699 "hdgst": ${hdgst:-false}, 00:26:56.699 "ddgst": ${ddgst:-false} 00:26:56.699 }, 00:26:56.699 "method": "bdev_nvme_attach_controller" 00:26:56.699 } 00:26:56.699 EOF 00:26:56.699 )") 00:26:56.699 12:21:09 -- nvmf/common.sh@542 -- # cat 00:26:56.699 12:21:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:56.699 12:21:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:56.699 { 00:26:56.699 "params": { 00:26:56.699 "name": "Nvme$subsystem", 00:26:56.699 "trtype": "$TEST_TRANSPORT", 00:26:56.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.699 "adrfam": "ipv4", 00:26:56.699 "trsvcid": "$NVMF_PORT", 00:26:56.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.699 "hdgst": ${hdgst:-false}, 00:26:56.699 "ddgst": ${ddgst:-false} 00:26:56.699 }, 00:26:56.699 "method": "bdev_nvme_attach_controller" 00:26:56.699 } 00:26:56.699 EOF 00:26:56.699 )") 00:26:56.700 12:21:09 -- nvmf/common.sh@542 -- # cat 00:26:56.700 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.700 12:21:09 -- nvmf/common.sh@544 -- # jq . 00:26:56.700 12:21:09 -- nvmf/common.sh@545 -- # IFS=, 00:26:56.700 12:21:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:56.700 "params": { 00:26:56.700 "name": "Nvme1", 00:26:56.700 "trtype": "tcp", 00:26:56.700 "traddr": "10.0.0.2", 00:26:56.700 "adrfam": "ipv4", 00:26:56.700 "trsvcid": "4420", 00:26:56.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:56.700 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:56.700 "hdgst": false, 00:26:56.700 "ddgst": false 00:26:56.700 }, 00:26:56.700 "method": "bdev_nvme_attach_controller" 00:26:56.700 },{ 00:26:56.700 "params": { 00:26:56.700 "name": "Nvme2", 00:26:56.700 "trtype": "tcp", 00:26:56.700 "traddr": "10.0.0.2", 00:26:56.700 "adrfam": "ipv4", 00:26:56.700 "trsvcid": "4420", 00:26:56.700 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:56.700 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:56.700 "hdgst": false, 00:26:56.700 "ddgst": false 00:26:56.700 }, 00:26:56.700 "method": "bdev_nvme_attach_controller" 00:26:56.700 },{ 00:26:56.700 "params": { 00:26:56.700 "name": "Nvme3", 00:26:56.700 "trtype": "tcp", 00:26:56.700 "traddr": "10.0.0.2", 00:26:56.700 "adrfam": "ipv4", 00:26:56.700 "trsvcid": "4420", 00:26:56.700 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:56.700 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:56.700 "hdgst": false, 00:26:56.700 "ddgst": false 00:26:56.700 }, 00:26:56.700 "method": "bdev_nvme_attach_controller" 00:26:56.700 },{ 00:26:56.700 "params": { 00:26:56.700 "name": "Nvme4", 00:26:56.700 "trtype": "tcp", 00:26:56.700 "traddr": "10.0.0.2", 00:26:56.700 "adrfam": "ipv4", 00:26:56.700 "trsvcid": "4420", 00:26:56.700 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:56.700 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:56.700 "hdgst": false, 00:26:56.700 "ddgst": false 00:26:56.700 }, 00:26:56.700 "method": "bdev_nvme_attach_controller" 00:26:56.700 },{ 00:26:56.700 "params": { 00:26:56.700 "name": "Nvme5", 00:26:56.700 "trtype": "tcp", 00:26:56.700 "traddr": "10.0.0.2", 00:26:56.700 
"adrfam": "ipv4", 00:26:56.700 "trsvcid": "4420", 00:26:56.700 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:56.700 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:56.700 "hdgst": false, 00:26:56.700 "ddgst": false 00:26:56.700 }, 00:26:56.700 "method": "bdev_nvme_attach_controller" 00:26:56.700 },{ 00:26:56.700 "params": { 00:26:56.700 "name": "Nvme6", 00:26:56.700 "trtype": "tcp", 00:26:56.700 "traddr": "10.0.0.2", 00:26:56.700 "adrfam": "ipv4", 00:26:56.700 "trsvcid": "4420", 00:26:56.700 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:56.700 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:56.700 "hdgst": false, 00:26:56.700 "ddgst": false 00:26:56.700 }, 00:26:56.700 "method": "bdev_nvme_attach_controller" 00:26:56.700 },{ 00:26:56.700 "params": { 00:26:56.700 "name": "Nvme7", 00:26:56.700 "trtype": "tcp", 00:26:56.700 "traddr": "10.0.0.2", 00:26:56.700 "adrfam": "ipv4", 00:26:56.700 "trsvcid": "4420", 00:26:56.700 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:56.700 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:56.700 "hdgst": false, 00:26:56.700 "ddgst": false 00:26:56.700 }, 00:26:56.700 "method": "bdev_nvme_attach_controller" 00:26:56.700 },{ 00:26:56.700 "params": { 00:26:56.700 "name": "Nvme8", 00:26:56.700 "trtype": "tcp", 00:26:56.700 "traddr": "10.0.0.2", 00:26:56.700 "adrfam": "ipv4", 00:26:56.700 "trsvcid": "4420", 00:26:56.700 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:56.700 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:56.700 "hdgst": false, 00:26:56.700 "ddgst": false 00:26:56.700 }, 00:26:56.700 "method": "bdev_nvme_attach_controller" 00:26:56.700 },{ 00:26:56.700 "params": { 00:26:56.700 "name": "Nvme9", 00:26:56.700 "trtype": "tcp", 00:26:56.700 "traddr": "10.0.0.2", 00:26:56.700 "adrfam": "ipv4", 00:26:56.700 "trsvcid": "4420", 00:26:56.700 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:56.700 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:56.700 "hdgst": false, 00:26:56.700 "ddgst": false 00:26:56.700 }, 00:26:56.700 "method": "bdev_nvme_attach_controller" 00:26:56.700 },{ 00:26:56.700 "params": { 00:26:56.700 "name": "Nvme10", 00:26:56.700 "trtype": "tcp", 00:26:56.700 "traddr": "10.0.0.2", 00:26:56.700 "adrfam": "ipv4", 00:26:56.700 "trsvcid": "4420", 00:26:56.700 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:56.700 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:56.700 "hdgst": false, 00:26:56.700 "ddgst": false 00:26:56.700 }, 00:26:56.700 "method": "bdev_nvme_attach_controller" 00:26:56.700 }' 00:26:56.960 [2024-06-11 12:21:09.748784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.960 [2024-06-11 12:21:09.778620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.871 Running I/O for 10 seconds... 
00:26:58.871 12:21:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:58.871 12:21:11 -- common/autotest_common.sh@852 -- # return 0 00:26:58.871 12:21:11 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:58.871 12:21:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:58.871 12:21:11 -- common/autotest_common.sh@10 -- # set +x 00:26:58.871 12:21:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:58.871 12:21:11 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:58.871 12:21:11 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:58.871 12:21:11 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:26:58.871 12:21:11 -- target/shutdown.sh@57 -- # local ret=1 00:26:58.871 12:21:11 -- target/shutdown.sh@58 -- # local i 00:26:58.871 12:21:11 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:26:58.871 12:21:11 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:58.871 12:21:11 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:58.871 12:21:11 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:58.871 12:21:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:58.871 12:21:11 -- common/autotest_common.sh@10 -- # set +x 00:26:58.871 12:21:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:58.871 12:21:11 -- target/shutdown.sh@60 -- # read_io_count=87 00:26:58.871 12:21:11 -- target/shutdown.sh@63 -- # '[' 87 -ge 100 ']' 00:26:58.871 12:21:11 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:59.131 12:21:12 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:59.131 12:21:12 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:59.131 12:21:12 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:59.131 12:21:12 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:59.131 12:21:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:59.131 12:21:12 -- common/autotest_common.sh@10 -- # set +x 00:26:59.131 12:21:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:59.390 12:21:12 -- target/shutdown.sh@60 -- # read_io_count=214 00:26:59.390 12:21:12 -- target/shutdown.sh@63 -- # '[' 214 -ge 100 ']' 00:26:59.390 12:21:12 -- target/shutdown.sh@64 -- # ret=0 00:26:59.390 12:21:12 -- target/shutdown.sh@65 -- # break 00:26:59.390 12:21:12 -- target/shutdown.sh@69 -- # return 0 00:26:59.390 12:21:12 -- target/shutdown.sh@109 -- # killprocess 1607727 00:26:59.390 12:21:12 -- common/autotest_common.sh@926 -- # '[' -z 1607727 ']' 00:26:59.390 12:21:12 -- common/autotest_common.sh@930 -- # kill -0 1607727 00:26:59.390 12:21:12 -- common/autotest_common.sh@931 -- # uname 00:26:59.390 12:21:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:59.390 12:21:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1607727 00:26:59.390 12:21:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:59.390 12:21:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:59.390 12:21:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1607727' 00:26:59.390 killing process with pid 1607727 00:26:59.390 12:21:12 -- common/autotest_common.sh@945 -- # kill 1607727 00:26:59.390 12:21:12 -- common/autotest_common.sh@950 -- # wait 1607727 00:26:59.390 Received shutdown signal, test time was about 0.815104 seconds 00:26:59.390 00:26:59.390 Latency(us) 00:26:59.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:26:59.390 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:59.390 Verification LBA range: start 0x0 length 0x400 00:26:59.390 Nvme1n1 : 0.81 390.56 24.41 0.00 0.00 153389.88 7591.25 161655.47 00:26:59.390 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:59.390 Verification LBA range: start 0x0 length 0x400 00:26:59.390 Nvme2n1 : 0.80 442.58 27.66 0.00 0.00 140226.58 15947.09 158160.21 00:26:59.390 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:59.390 Verification LBA range: start 0x0 length 0x400 00:26:59.390 Nvme3n1 : 0.80 446.25 27.89 0.00 0.00 137408.22 17148.59 145926.83 00:26:59.390 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:59.390 Verification LBA range: start 0x0 length 0x400 00:26:59.390 Nvme4n1 : 0.79 448.09 28.01 0.00 0.00 135409.84 17148.59 119712.43 00:26:59.390 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:59.390 Verification LBA range: start 0x0 length 0x400 00:26:59.390 Nvme5n1 : 0.79 449.87 28.12 0.00 0.00 133523.33 17585.49 134567.25 00:26:59.390 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:59.390 Verification LBA range: start 0x0 length 0x400 00:26:59.390 Nvme6n1 : 0.80 444.63 27.79 0.00 0.00 133747.13 16930.13 115343.36 00:26:59.390 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:59.390 Verification LBA range: start 0x0 length 0x400 00:26:59.390 Nvme7n1 : 0.80 443.54 27.72 0.00 0.00 132702.25 16384.00 110537.39 00:26:59.390 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:59.390 Verification LBA range: start 0x0 length 0x400 00:26:59.390 Nvme8n1 : 0.79 449.03 28.06 0.00 0.00 129394.06 16493.23 127576.75 00:26:59.390 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:59.390 Verification LBA range: start 0x0 length 0x400 00:26:59.390 Nvme9n1 : 0.80 441.35 27.58 0.00 0.00 130469.41 16056.32 113595.73 00:26:59.390 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:59.390 Verification LBA range: start 0x0 length 0x400 00:26:59.390 Nvme10n1 : 0.78 410.84 25.68 0.00 0.00 138205.87 4505.60 114469.55 00:26:59.390 =================================================================================================================== 00:26:59.390 Total : 4366.74 272.92 0.00 0.00 136248.16 4505.60 161655.47 00:26:59.650 12:21:12 -- target/shutdown.sh@112 -- # sleep 1 00:27:00.591 12:21:13 -- target/shutdown.sh@113 -- # kill -0 1607135 00:27:00.591 12:21:13 -- target/shutdown.sh@115 -- # stoptarget 00:27:00.591 12:21:13 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:00.591 12:21:13 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:00.591 12:21:13 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:00.591 12:21:13 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:00.591 12:21:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:00.591 12:21:13 -- nvmf/common.sh@116 -- # sync 00:27:00.591 12:21:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:00.591 12:21:13 -- nvmf/common.sh@119 -- # set +e 00:27:00.591 12:21:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:00.591 12:21:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:00.591 rmmod nvme_tcp 00:27:00.591 rmmod nvme_fabrics 
00:27:00.591 rmmod nvme_keyring 00:27:00.591 12:21:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:00.591 12:21:13 -- nvmf/common.sh@123 -- # set -e 00:27:00.591 12:21:13 -- nvmf/common.sh@124 -- # return 0 00:27:00.591 12:21:13 -- nvmf/common.sh@477 -- # '[' -n 1607135 ']' 00:27:00.591 12:21:13 -- nvmf/common.sh@478 -- # killprocess 1607135 00:27:00.592 12:21:13 -- common/autotest_common.sh@926 -- # '[' -z 1607135 ']' 00:27:00.592 12:21:13 -- common/autotest_common.sh@930 -- # kill -0 1607135 00:27:00.592 12:21:13 -- common/autotest_common.sh@931 -- # uname 00:27:00.592 12:21:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:00.592 12:21:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1607135 00:27:00.592 12:21:13 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:00.592 12:21:13 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:00.592 12:21:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1607135' 00:27:00.592 killing process with pid 1607135 00:27:00.592 12:21:13 -- common/autotest_common.sh@945 -- # kill 1607135 00:27:00.592 12:21:13 -- common/autotest_common.sh@950 -- # wait 1607135 00:27:00.852 12:21:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:00.852 12:21:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:00.852 12:21:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:00.852 12:21:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:00.852 12:21:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:00.852 12:21:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.852 12:21:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:00.852 12:21:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.394 12:21:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:03.394 00:27:03.394 real 0m7.987s 00:27:03.394 user 0m24.429s 00:27:03.394 sys 0m1.283s 00:27:03.394 12:21:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:03.394 12:21:15 -- common/autotest_common.sh@10 -- # set +x 00:27:03.394 ************************************ 00:27:03.394 END TEST nvmf_shutdown_tc2 00:27:03.394 ************************************ 00:27:03.394 12:21:15 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:03.394 12:21:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:03.394 12:21:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:03.394 12:21:15 -- common/autotest_common.sh@10 -- # set +x 00:27:03.394 ************************************ 00:27:03.394 START TEST nvmf_shutdown_tc3 00:27:03.394 ************************************ 00:27:03.394 12:21:15 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:27:03.394 12:21:15 -- target/shutdown.sh@120 -- # starttarget 00:27:03.394 12:21:15 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:03.394 12:21:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:03.394 12:21:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:03.394 12:21:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:03.394 12:21:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:03.394 12:21:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:03.394 12:21:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.394 12:21:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:03.394 12:21:15 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:27:03.394 12:21:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:03.394 12:21:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:03.394 12:21:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:03.394 12:21:15 -- common/autotest_common.sh@10 -- # set +x 00:27:03.394 12:21:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:03.394 12:21:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:03.394 12:21:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:03.394 12:21:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:03.394 12:21:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:03.394 12:21:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:03.394 12:21:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:03.394 12:21:15 -- nvmf/common.sh@294 -- # net_devs=() 00:27:03.394 12:21:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:03.394 12:21:15 -- nvmf/common.sh@295 -- # e810=() 00:27:03.394 12:21:15 -- nvmf/common.sh@295 -- # local -ga e810 00:27:03.394 12:21:15 -- nvmf/common.sh@296 -- # x722=() 00:27:03.394 12:21:15 -- nvmf/common.sh@296 -- # local -ga x722 00:27:03.394 12:21:15 -- nvmf/common.sh@297 -- # mlx=() 00:27:03.394 12:21:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:03.394 12:21:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:03.394 12:21:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:03.394 12:21:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:03.394 12:21:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:03.394 12:21:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:03.394 12:21:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:03.394 12:21:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:03.394 12:21:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:03.394 12:21:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:03.394 12:21:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:03.394 12:21:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:03.394 12:21:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:03.394 12:21:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:03.394 12:21:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:03.394 12:21:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:03.394 12:21:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:03.394 12:21:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:03.394 12:21:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:03.394 12:21:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:03.394 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:03.394 12:21:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:03.394 12:21:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:03.394 12:21:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.394 12:21:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.394 12:21:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:03.394 12:21:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:03.394 12:21:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:03.394 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:03.394 12:21:15 -- nvmf/common.sh@341 -- # [[ 
ice == unknown ]] 00:27:03.394 12:21:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:03.394 12:21:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.394 12:21:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.394 12:21:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:03.394 12:21:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:03.394 12:21:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:03.394 12:21:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:03.394 12:21:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:03.394 12:21:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.394 12:21:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:03.394 12:21:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.394 12:21:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:03.394 Found net devices under 0000:31:00.0: cvl_0_0 00:27:03.394 12:21:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.394 12:21:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:03.394 12:21:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.394 12:21:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:03.394 12:21:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.394 12:21:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:03.394 Found net devices under 0000:31:00.1: cvl_0_1 00:27:03.394 12:21:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.394 12:21:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:03.394 12:21:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:03.394 12:21:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:03.394 12:21:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:03.394 12:21:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:03.394 12:21:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:03.394 12:21:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:03.394 12:21:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:03.394 12:21:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:03.394 12:21:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:03.394 12:21:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:03.394 12:21:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:03.394 12:21:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:03.394 12:21:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:03.394 12:21:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:03.394 12:21:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:03.394 12:21:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:03.394 12:21:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:03.394 12:21:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:03.394 12:21:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:03.394 12:21:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:03.394 12:21:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:03.394 12:21:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:03.394 12:21:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:03.394 12:21:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:03.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:03.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.498 ms 00:27:03.394 00:27:03.394 --- 10.0.0.2 ping statistics --- 00:27:03.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.394 rtt min/avg/max/mdev = 0.498/0.498/0.498/0.000 ms 00:27:03.394 12:21:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:03.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:03.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:27:03.394 00:27:03.394 --- 10.0.0.1 ping statistics --- 00:27:03.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.394 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:27:03.394 12:21:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:03.394 12:21:16 -- nvmf/common.sh@410 -- # return 0 00:27:03.394 12:21:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:03.395 12:21:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:03.395 12:21:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:03.395 12:21:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:03.395 12:21:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:03.395 12:21:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:03.395 12:21:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:03.395 12:21:16 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:03.395 12:21:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:03.395 12:21:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:03.395 12:21:16 -- common/autotest_common.sh@10 -- # set +x 00:27:03.395 12:21:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:03.395 12:21:16 -- nvmf/common.sh@469 -- # nvmfpid=1609152 00:27:03.395 12:21:16 -- nvmf/common.sh@470 -- # waitforlisten 1609152 00:27:03.395 12:21:16 -- common/autotest_common.sh@819 -- # '[' -z 1609152 ']' 00:27:03.395 12:21:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.395 12:21:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:03.395 12:21:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.395 12:21:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:03.395 12:21:16 -- common/autotest_common.sh@10 -- # set +x 00:27:03.395 [2024-06-11 12:21:16.349838] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
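The nvmf_tgt for tc3 is now starting inside the cvl_0_0_ns_spdk namespace prepared by nvmftestinit above. For reference, that preparation reduces to the following topology; the commands are consolidated from the trace just shown (interface names and addresses are the ones from this run), not additional setup:

    # target-side port is isolated in its own network namespace; the initiator stays in the default one
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # cvl_0_0 -> target side, reachable as 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # cvl_0_1 -> initiator side, 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                   # initiator -> target reachability check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator reachability check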
00:27:03.395 [2024-06-11 12:21:16.349907] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:03.395 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.654 [2024-06-11 12:21:16.445155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:03.654 [2024-06-11 12:21:16.477238] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:03.654 [2024-06-11 12:21:16.477357] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:03.654 [2024-06-11 12:21:16.477365] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:03.654 [2024-06-11 12:21:16.477371] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:03.654 [2024-06-11 12:21:16.477501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:03.654 [2024-06-11 12:21:16.477627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:03.654 [2024-06-11 12:21:16.477786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.654 [2024-06-11 12:21:16.477788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:04.225 12:21:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:04.225 12:21:17 -- common/autotest_common.sh@852 -- # return 0 00:27:04.225 12:21:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:04.225 12:21:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:04.225 12:21:17 -- common/autotest_common.sh@10 -- # set +x 00:27:04.225 12:21:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:04.225 12:21:17 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:04.225 12:21:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:04.225 12:21:17 -- common/autotest_common.sh@10 -- # set +x 00:27:04.225 [2024-06-11 12:21:17.161078] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:04.225 12:21:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:04.225 12:21:17 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:04.225 12:21:17 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:04.225 12:21:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:04.225 12:21:17 -- common/autotest_common.sh@10 -- # set +x 00:27:04.225 12:21:17 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:04.225 12:21:17 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:04.225 12:21:17 -- target/shutdown.sh@28 -- # cat 00:27:04.225 12:21:17 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:04.225 12:21:17 -- target/shutdown.sh@28 -- # cat 00:27:04.225 12:21:17 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:04.225 12:21:17 -- target/shutdown.sh@28 -- # cat 00:27:04.225 12:21:17 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:04.225 12:21:17 -- target/shutdown.sh@28 -- # cat 00:27:04.225 12:21:17 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:04.225 12:21:17 -- target/shutdown.sh@28 -- # cat 00:27:04.225 12:21:17 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:04.225 12:21:17 -- 
target/shutdown.sh@28 -- # cat 00:27:04.225 12:21:17 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:04.225 12:21:17 -- target/shutdown.sh@28 -- # cat 00:27:04.225 12:21:17 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:04.225 12:21:17 -- target/shutdown.sh@28 -- # cat 00:27:04.225 12:21:17 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:04.225 12:21:17 -- target/shutdown.sh@28 -- # cat 00:27:04.225 12:21:17 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:04.225 12:21:17 -- target/shutdown.sh@28 -- # cat 00:27:04.225 12:21:17 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:04.225 12:21:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:04.225 12:21:17 -- common/autotest_common.sh@10 -- # set +x 00:27:04.225 Malloc1 00:27:04.487 [2024-06-11 12:21:17.259900] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:04.487 Malloc2 00:27:04.487 Malloc3 00:27:04.487 Malloc4 00:27:04.487 Malloc5 00:27:04.487 Malloc6 00:27:04.487 Malloc7 00:27:04.487 Malloc8 00:27:04.749 Malloc9 00:27:04.749 Malloc10 00:27:04.749 12:21:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:04.749 12:21:17 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:04.749 12:21:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:04.749 12:21:17 -- common/autotest_common.sh@10 -- # set +x 00:27:04.749 12:21:17 -- target/shutdown.sh@124 -- # perfpid=1609511 00:27:04.749 12:21:17 -- target/shutdown.sh@125 -- # waitforlisten 1609511 /var/tmp/bdevperf.sock 00:27:04.749 12:21:17 -- common/autotest_common.sh@819 -- # '[' -z 1609511 ']' 00:27:04.749 12:21:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:04.749 12:21:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:04.749 12:21:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:04.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
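The bdevperf launch traced next uses --json /dev/fd/63; that descriptor is almost certainly the read end of a process substitution carrying the gen_nvmf_target_json output, so bdevperf attaches one NVMe-oF controller per target subsystem before running the verify workload. Conceptually (a hedged sketch of the wiring, with flags taken from the trace):

    # feed the generated target description straight into bdevperf via process substitution
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10
    # gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per argument
    # (Nvme1..Nvme10 against nqn.2016-06.io.spdk:cnode1..10 at 10.0.0.2:4420),
    # which is the JSON printed in full a little further down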
00:27:04.749 12:21:17 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:04.749 12:21:17 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:04.749 12:21:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:04.749 12:21:17 -- common/autotest_common.sh@10 -- # set +x 00:27:04.749 12:21:17 -- nvmf/common.sh@520 -- # config=() 00:27:04.749 12:21:17 -- nvmf/common.sh@520 -- # local subsystem config 00:27:04.749 12:21:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:04.749 12:21:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:04.749 { 00:27:04.749 "params": { 00:27:04.749 "name": "Nvme$subsystem", 00:27:04.749 "trtype": "$TEST_TRANSPORT", 00:27:04.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.749 "adrfam": "ipv4", 00:27:04.749 "trsvcid": "$NVMF_PORT", 00:27:04.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.749 "hdgst": ${hdgst:-false}, 00:27:04.749 "ddgst": ${ddgst:-false} 00:27:04.749 }, 00:27:04.749 "method": "bdev_nvme_attach_controller" 00:27:04.749 } 00:27:04.749 EOF 00:27:04.749 )") 00:27:04.749 12:21:17 -- nvmf/common.sh@542 -- # cat 00:27:04.749 12:21:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:04.749 12:21:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:04.749 { 00:27:04.749 "params": { 00:27:04.749 "name": "Nvme$subsystem", 00:27:04.749 "trtype": "$TEST_TRANSPORT", 00:27:04.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.749 "adrfam": "ipv4", 00:27:04.749 "trsvcid": "$NVMF_PORT", 00:27:04.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.749 "hdgst": ${hdgst:-false}, 00:27:04.749 "ddgst": ${ddgst:-false} 00:27:04.749 }, 00:27:04.749 "method": "bdev_nvme_attach_controller" 00:27:04.749 } 00:27:04.749 EOF 00:27:04.749 )") 00:27:04.749 12:21:17 -- nvmf/common.sh@542 -- # cat 00:27:04.749 12:21:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:04.749 12:21:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:04.749 { 00:27:04.749 "params": { 00:27:04.749 "name": "Nvme$subsystem", 00:27:04.749 "trtype": "$TEST_TRANSPORT", 00:27:04.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.749 "adrfam": "ipv4", 00:27:04.749 "trsvcid": "$NVMF_PORT", 00:27:04.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.749 "hdgst": ${hdgst:-false}, 00:27:04.749 "ddgst": ${ddgst:-false} 00:27:04.749 }, 00:27:04.749 "method": "bdev_nvme_attach_controller" 00:27:04.749 } 00:27:04.749 EOF 00:27:04.749 )") 00:27:04.749 12:21:17 -- nvmf/common.sh@542 -- # cat 00:27:04.749 12:21:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:04.749 12:21:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:04.749 { 00:27:04.749 "params": { 00:27:04.749 "name": "Nvme$subsystem", 00:27:04.749 "trtype": "$TEST_TRANSPORT", 00:27:04.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.749 "adrfam": "ipv4", 00:27:04.749 "trsvcid": "$NVMF_PORT", 00:27:04.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.749 "hdgst": ${hdgst:-false}, 00:27:04.749 "ddgst": ${ddgst:-false} 00:27:04.749 }, 00:27:04.749 "method": "bdev_nvme_attach_controller" 00:27:04.749 } 00:27:04.749 EOF 00:27:04.749 )") 
00:27:04.749 12:21:17 -- nvmf/common.sh@542 -- # cat 00:27:04.749 12:21:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:04.749 12:21:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:04.749 { 00:27:04.749 "params": { 00:27:04.749 "name": "Nvme$subsystem", 00:27:04.749 "trtype": "$TEST_TRANSPORT", 00:27:04.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.749 "adrfam": "ipv4", 00:27:04.749 "trsvcid": "$NVMF_PORT", 00:27:04.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.749 "hdgst": ${hdgst:-false}, 00:27:04.749 "ddgst": ${ddgst:-false} 00:27:04.749 }, 00:27:04.749 "method": "bdev_nvme_attach_controller" 00:27:04.749 } 00:27:04.749 EOF 00:27:04.749 )") 00:27:04.749 12:21:17 -- nvmf/common.sh@542 -- # cat 00:27:04.749 12:21:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:04.749 12:21:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:04.749 { 00:27:04.749 "params": { 00:27:04.749 "name": "Nvme$subsystem", 00:27:04.749 "trtype": "$TEST_TRANSPORT", 00:27:04.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.749 "adrfam": "ipv4", 00:27:04.749 "trsvcid": "$NVMF_PORT", 00:27:04.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.749 "hdgst": ${hdgst:-false}, 00:27:04.749 "ddgst": ${ddgst:-false} 00:27:04.749 }, 00:27:04.749 "method": "bdev_nvme_attach_controller" 00:27:04.749 } 00:27:04.749 EOF 00:27:04.749 )") 00:27:04.749 12:21:17 -- nvmf/common.sh@542 -- # cat 00:27:04.749 12:21:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:04.749 12:21:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:04.749 { 00:27:04.749 "params": { 00:27:04.749 "name": "Nvme$subsystem", 00:27:04.749 "trtype": "$TEST_TRANSPORT", 00:27:04.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.749 "adrfam": "ipv4", 00:27:04.749 "trsvcid": "$NVMF_PORT", 00:27:04.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.750 "hdgst": ${hdgst:-false}, 00:27:04.750 "ddgst": ${ddgst:-false} 00:27:04.750 }, 00:27:04.750 "method": "bdev_nvme_attach_controller" 00:27:04.750 } 00:27:04.750 EOF 00:27:04.750 )") 00:27:04.750 12:21:17 -- nvmf/common.sh@542 -- # cat 00:27:04.750 12:21:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:04.750 [2024-06-11 12:21:17.710573] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:04.750 12:21:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:04.750 { 00:27:04.750 "params": { 00:27:04.750 "name": "Nvme$subsystem", 00:27:04.750 "trtype": "$TEST_TRANSPORT", 00:27:04.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.750 "adrfam": "ipv4", 00:27:04.750 "trsvcid": "$NVMF_PORT", 00:27:04.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.750 "hdgst": ${hdgst:-false}, 00:27:04.750 "ddgst": ${ddgst:-false} 00:27:04.750 }, 00:27:04.750 "method": "bdev_nvme_attach_controller" 00:27:04.750 } 00:27:04.750 EOF 00:27:04.750 )") 00:27:04.750 [2024-06-11 12:21:17.710641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609511 ] 00:27:04.750 12:21:17 -- nvmf/common.sh@542 -- # cat 00:27:04.750 12:21:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:04.750 12:21:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:04.750 { 00:27:04.750 "params": { 00:27:04.750 "name": "Nvme$subsystem", 00:27:04.750 "trtype": "$TEST_TRANSPORT", 00:27:04.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.750 "adrfam": "ipv4", 00:27:04.750 "trsvcid": "$NVMF_PORT", 00:27:04.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.750 "hdgst": ${hdgst:-false}, 00:27:04.750 "ddgst": ${ddgst:-false} 00:27:04.750 }, 00:27:04.750 "method": "bdev_nvme_attach_controller" 00:27:04.750 } 00:27:04.750 EOF 00:27:04.750 )") 00:27:04.750 12:21:17 -- nvmf/common.sh@542 -- # cat 00:27:04.750 12:21:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:04.750 12:21:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:04.750 { 00:27:04.750 "params": { 00:27:04.750 "name": "Nvme$subsystem", 00:27:04.750 "trtype": "$TEST_TRANSPORT", 00:27:04.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.750 "adrfam": "ipv4", 00:27:04.750 "trsvcid": "$NVMF_PORT", 00:27:04.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.750 "hdgst": ${hdgst:-false}, 00:27:04.750 "ddgst": ${ddgst:-false} 00:27:04.750 }, 00:27:04.750 "method": "bdev_nvme_attach_controller" 00:27:04.750 } 00:27:04.750 EOF 00:27:04.750 )") 00:27:04.750 12:21:17 -- nvmf/common.sh@542 -- # cat 00:27:04.750 12:21:17 -- nvmf/common.sh@544 -- # jq . 
00:27:04.750 12:21:17 -- nvmf/common.sh@545 -- # IFS=, 00:27:04.750 12:21:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:04.750 "params": { 00:27:04.750 "name": "Nvme1", 00:27:04.750 "trtype": "tcp", 00:27:04.750 "traddr": "10.0.0.2", 00:27:04.750 "adrfam": "ipv4", 00:27:04.750 "trsvcid": "4420", 00:27:04.750 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:04.750 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:04.750 "hdgst": false, 00:27:04.750 "ddgst": false 00:27:04.750 }, 00:27:04.750 "method": "bdev_nvme_attach_controller" 00:27:04.750 },{ 00:27:04.750 "params": { 00:27:04.750 "name": "Nvme2", 00:27:04.750 "trtype": "tcp", 00:27:04.750 "traddr": "10.0.0.2", 00:27:04.750 "adrfam": "ipv4", 00:27:04.750 "trsvcid": "4420", 00:27:04.750 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:04.750 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:04.750 "hdgst": false, 00:27:04.750 "ddgst": false 00:27:04.750 }, 00:27:04.750 "method": "bdev_nvme_attach_controller" 00:27:04.750 },{ 00:27:04.750 "params": { 00:27:04.750 "name": "Nvme3", 00:27:04.750 "trtype": "tcp", 00:27:04.750 "traddr": "10.0.0.2", 00:27:04.750 "adrfam": "ipv4", 00:27:04.750 "trsvcid": "4420", 00:27:04.750 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:04.750 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:04.750 "hdgst": false, 00:27:04.750 "ddgst": false 00:27:04.750 }, 00:27:04.750 "method": "bdev_nvme_attach_controller" 00:27:04.750 },{ 00:27:04.750 "params": { 00:27:04.750 "name": "Nvme4", 00:27:04.750 "trtype": "tcp", 00:27:04.750 "traddr": "10.0.0.2", 00:27:04.750 "adrfam": "ipv4", 00:27:04.750 "trsvcid": "4420", 00:27:04.750 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:04.750 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:04.750 "hdgst": false, 00:27:04.750 "ddgst": false 00:27:04.750 }, 00:27:04.750 "method": "bdev_nvme_attach_controller" 00:27:04.750 },{ 00:27:04.750 "params": { 00:27:04.750 "name": "Nvme5", 00:27:04.750 "trtype": "tcp", 00:27:04.750 "traddr": "10.0.0.2", 00:27:04.750 "adrfam": "ipv4", 00:27:04.750 "trsvcid": "4420", 00:27:04.750 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:04.750 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:04.750 "hdgst": false, 00:27:04.750 "ddgst": false 00:27:04.750 }, 00:27:04.750 "method": "bdev_nvme_attach_controller" 00:27:04.750 },{ 00:27:04.750 "params": { 00:27:04.750 "name": "Nvme6", 00:27:04.750 "trtype": "tcp", 00:27:04.750 "traddr": "10.0.0.2", 00:27:04.750 "adrfam": "ipv4", 00:27:04.750 "trsvcid": "4420", 00:27:04.750 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:04.750 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:04.750 "hdgst": false, 00:27:04.750 "ddgst": false 00:27:04.750 }, 00:27:04.750 "method": "bdev_nvme_attach_controller" 00:27:04.750 },{ 00:27:04.750 "params": { 00:27:04.750 "name": "Nvme7", 00:27:04.750 "trtype": "tcp", 00:27:04.750 "traddr": "10.0.0.2", 00:27:04.750 "adrfam": "ipv4", 00:27:04.750 "trsvcid": "4420", 00:27:04.750 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:04.750 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:04.750 "hdgst": false, 00:27:04.750 "ddgst": false 00:27:04.750 }, 00:27:04.750 "method": "bdev_nvme_attach_controller" 00:27:04.750 },{ 00:27:04.750 "params": { 00:27:04.750 "name": "Nvme8", 00:27:04.750 "trtype": "tcp", 00:27:04.750 "traddr": "10.0.0.2", 00:27:04.750 "adrfam": "ipv4", 00:27:04.750 "trsvcid": "4420", 00:27:04.750 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:04.750 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:04.750 "hdgst": false, 00:27:04.750 "ddgst": false 00:27:04.750 }, 00:27:04.750 "method": 
"bdev_nvme_attach_controller" 00:27:04.750 },{ 00:27:04.750 "params": { 00:27:04.750 "name": "Nvme9", 00:27:04.750 "trtype": "tcp", 00:27:04.750 "traddr": "10.0.0.2", 00:27:04.750 "adrfam": "ipv4", 00:27:04.750 "trsvcid": "4420", 00:27:04.750 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:04.750 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:04.750 "hdgst": false, 00:27:04.750 "ddgst": false 00:27:04.750 }, 00:27:04.750 "method": "bdev_nvme_attach_controller" 00:27:04.750 },{ 00:27:04.750 "params": { 00:27:04.750 "name": "Nvme10", 00:27:04.750 "trtype": "tcp", 00:27:04.750 "traddr": "10.0.0.2", 00:27:04.750 "adrfam": "ipv4", 00:27:04.750 "trsvcid": "4420", 00:27:04.750 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:04.750 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:04.750 "hdgst": false, 00:27:04.750 "ddgst": false 00:27:04.750 }, 00:27:04.750 "method": "bdev_nvme_attach_controller" 00:27:04.750 }' 00:27:04.750 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.750 [2024-06-11 12:21:17.771260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.012 [2024-06-11 12:21:17.800379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.397 Running I/O for 10 seconds... 00:27:06.987 12:21:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:06.987 12:21:19 -- common/autotest_common.sh@852 -- # return 0 00:27:06.987 12:21:19 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:06.987 12:21:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:06.987 12:21:19 -- common/autotest_common.sh@10 -- # set +x 00:27:06.987 12:21:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:06.987 12:21:19 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:06.987 12:21:19 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:06.987 12:21:19 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:06.987 12:21:19 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:06.987 12:21:19 -- target/shutdown.sh@57 -- # local ret=1 00:27:06.987 12:21:19 -- target/shutdown.sh@58 -- # local i 00:27:06.987 12:21:19 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:06.987 12:21:19 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:06.987 12:21:19 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:06.987 12:21:19 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:06.987 12:21:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:06.987 12:21:19 -- common/autotest_common.sh@10 -- # set +x 00:27:06.987 12:21:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:06.987 12:21:19 -- target/shutdown.sh@60 -- # read_io_count=167 00:27:06.987 12:21:19 -- target/shutdown.sh@63 -- # '[' 167 -ge 100 ']' 00:27:06.987 12:21:19 -- target/shutdown.sh@64 -- # ret=0 00:27:06.987 12:21:19 -- target/shutdown.sh@65 -- # break 00:27:06.987 12:21:19 -- target/shutdown.sh@69 -- # return 0 00:27:06.987 12:21:19 -- target/shutdown.sh@134 -- # killprocess 1609152 00:27:06.987 12:21:19 -- common/autotest_common.sh@926 -- # '[' -z 1609152 ']' 00:27:06.987 12:21:19 -- common/autotest_common.sh@930 -- # kill -0 1609152 00:27:06.987 12:21:19 -- common/autotest_common.sh@931 -- # uname 00:27:06.987 12:21:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:06.987 12:21:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 
1609152
00:27:06.987 12:21:19 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:27:06.987 12:21:19 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:27:06.987 12:21:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1609152'
killing process with pid 1609152
00:27:06.987 12:21:19 -- common/autotest_common.sh@945 -- # kill 1609152
00:27:06.987 12:21:19 -- common/autotest_common.sh@950 -- # wait 1609152
00:27:06.987 [2024-06-11 12:21:19.890912] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c2c10 is same with the state(5) to be set
00:27:06.988 [2024-06-11 12:21:19.891279] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c2c10 is same with the state(5) to be set
00:27:06.988 [2024-06-11 12:21:19.892966] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c55e0 is same with the state(5) to be set
00:27:06.988 [2024-06-11 12:21:19.893096] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c55e0 is same with the state(5) to be set
00:27:06.988 [2024-06-11 12:21:19.894121] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c30c0 is same with the state(5) to be set
00:27:06.989 [2024-06-11 12:21:19.894432] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c30c0 is same with the state(5) to be set
00:27:06.989 [2024-06-11 12:21:19.896073] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3570 is same with the state(5) to be set
00:27:06.990 [2024-06-11 12:21:19.896388] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3570 is same with the state(5) to be set
00:27:06.990 [2024-06-11 12:21:19.897219] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3a00 is same with the state(5) to be set
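The tcp.c:1574 records above are the SPDK target logging that a qpair's PDU receive state is being set to the value it already holds while the connections are torn down after the kill. Below is a minimal sketch of that kind of guard; the type and function names are simplified stand-ins for illustration, not the SPDK source itself.

#include <stdio.h>

/* Simplified stand-in for the SPDK TCP qpair; only the receive state matters here. */
struct tcp_qpair {
	int recv_state;
};

/* Sketch of the guard that produces the repeated *ERROR* record: if the qpair
 * is already in the requested receive state, log and return instead of
 * redoing the state-change bookkeeping. */
static void qpair_set_recv_state(struct tcp_qpair *tqpair, int state)
{
	if (tqpair->recv_state == state) {
		fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
			(void *)tqpair, state);
		return;
	}
	tqpair->recv_state = state;
	/* per-state bookkeeping would follow here */
}

int main(void)
{
	struct tcp_qpair q = { .recv_state = 5 };

	/* Repeated calls while the qpair already sits in state 5 reproduce the
	 * flood of identical records seen above. */
	qpair_set_recv_state(&q, 5);
	qpair_set_recv_state(&q, 5);
	return 0;
}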
00:27:06.990 [2024-06-11 12:21:19.897515] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3a00 is same with the state(5) to be set 00:27:06.990 [2024-06-11 12:21:19.897520] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3a00 is same with the state(5) to be set 00:27:06.990 [2024-06-11 12:21:19.897525] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3a00 is same with the state(5) to be set 00:27:06.990 [2024-06-11 12:21:19.897530] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3a00 is same with the state(5) to be set 00:27:06.990 [2024-06-11 12:21:19.897534] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3a00 is same with the state(5) to be set 00:27:06.990 [2024-06-11 12:21:19.897718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.990 [2024-06-11 12:21:19.897755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.991 [2024-06-11 12:21:19.897766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.991 [2024-06-11 12:21:19.897774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.991 [2024-06-11 12:21:19.897782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.991 [2024-06-11 12:21:19.897789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.991 [2024-06-11 12:21:19.897797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.991 [2024-06-11 12:21:19.897804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.991 [2024-06-11 12:21:19.897812] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577510 is same with the state(5) to be set 00:27:06.991 [2024-06-11 12:21:19.897842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.991 [2024-06-11 12:21:19.897851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.991 [2024-06-11 12:21:19.897859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.991 [2024-06-11 12:21:19.897866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.991 [2024-06-11 12:21:19.897874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.991 [2024-06-11 12:21:19.897881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.991 [2024-06-11 12:21:19.897889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 
nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.991 [2024-06-11 12:21:19.897896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.991 [2024-06-11 12:21:19.897903] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1403630 is same with the state(5) to be set 00:27:06.991 [2024-06-11 12:21:19.897925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.991 [2024-06-11 12:21:19.897934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.991 [2024-06-11 12:21:19.897942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.991 [2024-06-11 12:21:19.897953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.991 [2024-06-11 12:21:19.897961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.991 [2024-06-11 12:21:19.897969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.991 [2024-06-11 12:21:19.897977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.991 [2024-06-11 12:21:19.897984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.991 [2024-06-11 12:21:19.897991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d72c0 is same with the state(5) to be set 00:27:06.991 [2024-06-11 12:21:19.898040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.991 [2024-06-11 12:21:19.898055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.991 [2024-06-11 12:21:19.898063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.991 [2024-06-11 12:21:19.898070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.991 [2024-06-11 12:21:19.898078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.991 [2024-06-11 12:21:19.898085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.991 [2024-06-11 12:21:19.898093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.991 [2024-06-11 12:21:19.898100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.991 [2024-06-11 12:21:19.898107] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e49b0 is same with the state(5) to be set 00:27:06.991 [2024-06-11 12:21:19.898128] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.991 [2024-06-11 12:21:19.898137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.991 [2024-06-11 12:21:19.898145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.991 [2024-06-11 12:21:19.898152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.991 [2024-06-11 12:21:19.898160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.991 [2024-06-11 12:21:19.898167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.991 [2024-06-11 12:21:19.898176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.991 [2024-06-11 12:21:19.898183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.991 [2024-06-11 12:21:19.898181] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3eb0 is same with the state(5) to be set 00:27:06.991 [2024-06-11 12:21:19.898190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fa430 is same with the state(5) to be set 00:27:06.991 [2024-06-11 12:21:19.898196] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3eb0 is same with the state(5) to be set 00:27:06.991 [2024-06-11 12:21:19.898203] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3eb0 is same with the state(5) to be set 00:27:06.991 [2024-06-11 12:21:19.898208] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3eb0 is same with the state(5) to be set 00:27:06.991 [2024-06-11 12:21:19.898213] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3eb0 is same with the state(5) to be set 00:27:06.991 [2024-06-11 12:21:19.898219] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3eb0 is same with the state(5) to be set 00:27:06.991 [2024-06-11 12:21:19.898225] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3eb0 is same with the state(5) to be set 00:27:06.991 [2024-06-11 12:21:19.898230] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3eb0 is same with the state(5) to be set 00:27:06.991 [2024-06-11 12:21:19.898234] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3eb0 is same with the state(5) to be set 00:27:06.991 [2024-06-11 12:21:19.898240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3eb0 is same with the state(5) to be set 00:27:06.991 [2024-06-11 12:21:19.898244] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3eb0 is same with the state(5) to be set 00:27:06.991 [2024-06-11 12:21:19.898249] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3eb0 is same with the state(5) to be set 00:27:06.991 [2024-06-11 12:21:19.898254] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x8c3eb0 is same with the state(5) to be set
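The nvme_qpair.c NOTICE records in between come from the host-side driver printing the admin commands that were still outstanding when the admin queues were destroyed; opcode 0c is Asynchronous Event Request, which the host keeps posted at all times. A hedged check along these lines (using the public SPDK spec header; the helper name is illustrative) is enough to recognize those entries:

#include <stdbool.h>
#include "spdk/nvme_spec.h"

/* True when an admin command captured at teardown is one of the
 * always-outstanding Asynchronous Event Requests, i.e. the
 * "ASYNC EVENT REQUEST (0c)" entries printed above. */
static bool is_async_event_request(const struct spdk_nvme_cmd *cmd)
{
	return cmd->opc == SPDK_NVME_OPC_ASYNC_EVENT_REQUEST;
}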
00:27:06.992 [2024-06-11 12:21:19.898476] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3eb0 is same with the state(5) to be set 00:27:06.992 [2024-06-11 12:21:19.898481] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3eb0 is same with the state(5) to be set 00:27:06.992 [2024-06-11 12:21:19.898486] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3eb0 is same with the state(5) to be set 00:27:06.992 [2024-06-11 12:21:19.898490] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3eb0 is same with the state(5) to be set 00:27:06.992 [2024-06-11 12:21:19.898495] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3eb0 is same with the state(5) to be set 00:27:06.992 [2024-06-11 12:21:19.898499] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3eb0 is same with the state(5) to be set 00:27:06.992 [2024-06-11 12:21:19.898504] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c3eb0 is same with the state(5) to be set 00:27:06.992 [2024-06-11 12:21:19.898656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.992 [2024-06-11 12:21:19.898677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.992 [2024-06-11 12:21:19.898694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.992 [2024-06-11 12:21:19.898702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.992 [2024-06-11 12:21:19.898712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.992 [2024-06-11 12:21:19.898719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.992 [2024-06-11 12:21:19.898729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.992 [2024-06-11 12:21:19.898736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.992 [2024-06-11 12:21:19.898746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.992 [2024-06-11 12:21:19.898753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.992 [2024-06-11 12:21:19.898762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.992 [2024-06-11 12:21:19.898770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.992 [2024-06-11 12:21:19.898779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.992 [2024-06-11 12:21:19.898786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.992 [2024-06-11 12:21:19.898796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.992 [2024-06-11 12:21:19.898803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.992 [2024-06-11 12:21:19.898812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.992 [2024-06-11 12:21:19.898819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.992 [2024-06-11 12:21:19.898832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.992 [2024-06-11 12:21:19.898840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.992 [2024-06-11 12:21:19.898850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.992 [2024-06-11 12:21:19.898858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.992 [2024-06-11 12:21:19.898867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.992 [2024-06-11 12:21:19.898875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.992 [2024-06-11 12:21:19.898884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.992 [2024-06-11 12:21:19.898891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.992 [2024-06-11 12:21:19.898900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.992 [2024-06-11 12:21:19.898907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.992 [2024-06-11 12:21:19.898917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.992 [2024-06-11 12:21:19.898924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.992 [2024-06-11 12:21:19.898938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.992 [2024-06-11 12:21:19.898946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.992 [2024-06-11 12:21:19.898955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.992 [2024-06-11 12:21:19.898962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.992 [2024-06-11 12:21:19.898971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.992 [2024-06-11 12:21:19.898979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.898988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.898996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:06.993 [2024-06-11 12:21:19.899311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899380] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.993 [2024-06-11 12:21:19.899386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899396] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.993 [2024-06-11 12:21:19.899398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899401] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.993 [2024-06-11 12:21:19.899406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899407] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.993 [2024-06-11 12:21:19.899414] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.993 [2024-06-11 12:21:19.899417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.993 [2024-06-11 12:21:19.899425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899426] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set
00:27:06.993 [2024-06-11 12:21:19.899435] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.993 [2024-06-11 12:21:19.899438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899439] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.993 [2024-06-11 12:21:19.899445] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.993 [2024-06-11 12:21:19.899445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899450] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.993 [2024-06-11 12:21:19.899455] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.993 [2024-06-11 12:21:19.899455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.993 [2024-06-11 12:21:19.899464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899466] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.993 [2024-06-11 12:21:19.899472] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.993 [2024-06-11 12:21:19.899474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.993 [2024-06-11 12:21:19.899477] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.993 [2024-06-11 12:21:19.899483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.993 [2024-06-11 12:21:19.899484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.993 [2024-06-11 12:21:19.899488] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.993 [2024-06-11 12:21:19.899494] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.994 [2024-06-11 12:21:19.899499] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899504] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:06.994 [2024-06-11 12:21:19.899510] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899516] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.994 [2024-06-11 12:21:19.899523] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.994 [2024-06-11 12:21:19.899528] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899534] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.994 [2024-06-11 12:21:19.899538] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899543] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.994 [2024-06-11 12:21:19.899550] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.994 [2024-06-11 12:21:19.899555] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899564] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.994 [2024-06-11 12:21:19.899568] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899573] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.994 [2024-06-11 12:21:19.899580] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the
state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.994 [2024-06-11 12:21:19.899585] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899590] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.994 [2024-06-11 12:21:19.899594] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899600] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.994 [2024-06-11 12:21:19.899604] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899609] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.994 [2024-06-11 12:21:19.899615] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.994 [2024-06-11 12:21:19.899621] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899627] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.994 [2024-06-11 12:21:19.899631] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899636] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.994 [2024-06-11 12:21:19.899642] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899647] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:06.994 [2024-06-11 12:21:19.899652] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.994 [2024-06-11 12:21:19.899657] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899664] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.994 [2024-06-11 12:21:19.899668] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899674] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.994 [2024-06-11 12:21:19.899679] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899684] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.994 [2024-06-11 12:21:19.899689] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.994 [2024-06-11 12:21:19.899695] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899700] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.994 [2024-06-11 12:21:19.899712] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.994 [2024-06-11 12:21:19.899718] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899723] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.994
[2024-06-11 12:21:19.899728] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.994 [2024-06-11 12:21:19.899733] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899739] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4360 is same with the state(5) to be set 00:27:06.994 [2024-06-11 12:21:19.899741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.994 [2024-06-11 12:21:19.899748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.994 [2024-06-11 12:21:19.899757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.994 [2024-06-11 12:21:19.899765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.994 [2024-06-11 12:21:19.899775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.994 [2024-06-11 12:21:19.899782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.994 [2024-06-11 12:21:19.899791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.994 [2024-06-11 12:21:19.899798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.995 [2024-06-11 12:21:19.900108] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1525130 was disconnected and freed. reset controller. 
00:27:06.995 [2024-06-11 12:21:19.900582] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900597] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900603] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900607] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900612] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900617] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900622] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900626] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900634] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900640] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900645] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900649] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900654] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900659] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900663] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900668] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900673] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900678] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900683] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900688] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900693] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900698] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is 
same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900702] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900707] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900712] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900716] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900721] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900725] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900730] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900735] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900740] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900745] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900749] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900759] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900763] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900769] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900774] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900778] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900783] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900788] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900793] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900798] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900802] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900811] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900816] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900821] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900825] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.900831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4810 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901434] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901449] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901454] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901459] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901463] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901469] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901473] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901479] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901489] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901493] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901498] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901502] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901510] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901514] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901519] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901524] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901528] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901533] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901537] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901542] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901547] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901552] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901556] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901561] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901565] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901570] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901576] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901581] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901585] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.995 [2024-06-11 12:21:19.901590] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.996 [2024-06-11 12:21:19.901594] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.996 [2024-06-11 12:21:19.901599] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.996 [2024-06-11 12:21:19.901603] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.996 [2024-06-11 12:21:19.901608] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.996 [2024-06-11 12:21:19.901612] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.996 [2024-06-11 12:21:19.901617] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.996 [2024-06-11 12:21:19.901621] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 
00:27:06.996 [2024-06-11 12:21:19.901626] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.996 [2024-06-11 12:21:19.901631] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.996 [2024-06-11 12:21:19.901638] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.996 [2024-06-11 12:21:19.901642] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.996 [2024-06-11 12:21:19.901647] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.996 [2024-06-11 12:21:19.901728] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:06.996 [2024-06-11 12:21:19.901757] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1577510 (9): Bad file descriptor 00:27:06.996 [2024-06-11 12:21:19.902283] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:06.996 [2024-06-11 12:21:19.903382] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:06.996 [2024-06-11 12:21:19.903427] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:06.996 [2024-06-11 12:21:19.903461] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:06.996 [2024-06-11 12:21:19.903572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-06-11 12:21:19.903585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.996 [2024-06-11 12:21:19.903598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-06-11 12:21:19.903607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.996 [2024-06-11 12:21:19.903617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-06-11 12:21:19.903624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.996 [2024-06-11 12:21:19.903633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-06-11 12:21:19.903640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.996 [2024-06-11 12:21:19.903650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-06-11 12:21:19.903657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.996 [2024-06-11 12:21:19.903666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-06-11 
12:21:19.903673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.996 [2024-06-11 12:21:19.903683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-06-11 12:21:19.903691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.996 [2024-06-11 12:21:19.903701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-06-11 12:21:19.903708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.996 [2024-06-11 12:21:19.903718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-06-11 12:21:19.903725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.996 [2024-06-11 12:21:19.903734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-06-11 12:21:19.903745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.996 [2024-06-11 12:21:19.903755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-06-11 12:21:19.903762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.996 [2024-06-11 12:21:19.903771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-06-11 12:21:19.903778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.996 [2024-06-11 12:21:19.903788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-06-11 12:21:19.903796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.996 [2024-06-11 12:21:19.903805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-06-11 12:21:19.903813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.996 [2024-06-11 12:21:19.903821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-06-11 12:21:19.903829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.996 [2024-06-11 12:21:19.903838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-06-11 12:21:19.903846] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.996 [2024-06-11 12:21:19.903855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-06-11 12:21:19.903862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.996 [2024-06-11 12:21:19.903871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-06-11 12:21:19.903879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.996 [2024-06-11 12:21:19.903889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-06-11 12:21:19.903896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.996 [2024-06-11 12:21:19.903905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-06-11 12:21:19.903912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.996 [2024-06-11 12:21:19.903921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-06-11 12:21:19.903929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.996 [2024-06-11 12:21:19.903938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-06-11 12:21:19.903946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.996 [2024-06-11 12:21:19.903957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-06-11 12:21:19.903964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.903973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.903980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.903989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.903997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.904006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.904013] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.904103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.904111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.904120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.904127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.904137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.904144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.904153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.904161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.904170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.904181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.904191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.904198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.904208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.904216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.904225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.904233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.904243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.904252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.904261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.904268] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.904277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.904285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.904295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.904302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.904311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.904321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.904330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.904338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.904347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.904354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.904363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.904370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.904380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.904388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.904397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.904404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.904413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.904421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.904430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.904437] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.904447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.904455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.904466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.904474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.904483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.904490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.997 [2024-06-11 12:21:19.904500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.997 [2024-06-11 12:21:19.917547] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.997 [2024-06-11 12:21:19.917573] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.997 [2024-06-11 12:21:19.917581] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.997 [2024-06-11 12:21:19.917589] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.997 [2024-06-11 12:21:19.917596] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.997 [2024-06-11 12:21:19.917602] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.997 [2024-06-11 12:21:19.917608] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.997 [2024-06-11 12:21:19.917614] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.997 [2024-06-11 12:21:19.917620] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.997 [2024-06-11 12:21:19.917629] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.997 [2024-06-11 12:21:19.917635] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.917641] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.917647] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.917654] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.917660] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.917666] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.917673] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.917679] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.917686] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.917692] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c4ca0 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918310] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918328] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918333] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918337] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918342] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918347] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918352] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918356] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918361] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918371] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918375] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918380] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918385] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918390] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 
00:27:06.998 [2024-06-11 12:21:19.918395] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918399] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918404] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918409] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918413] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918418] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918423] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918428] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918432] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918437] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918441] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918446] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918451] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918456] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918462] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918467] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918472] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918476] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918481] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918487] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918492] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918497] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is 
same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918501] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918506] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918510] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918515] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918520] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918525] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918530] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918534] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918539] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918543] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918548] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918552] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918557] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918562] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918567] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918571] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918576] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918580] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918585] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918591] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918595] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918600] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918604] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918609] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918613] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.918619] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5130 is same with the state(5) to be set 00:27:06.998 [2024-06-11 12:21:19.921038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.998 [2024-06-11 12:21:19.921074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.998 [2024-06-11 12:21:19.921086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.998 [2024-06-11 12:21:19.921097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.998 [2024-06-11 12:21:19.921107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.999 [2024-06-11 12:21:19.921127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.999 [2024-06-11 12:21:19.921148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.999 [2024-06-11 12:21:19.921167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.999 [2024-06-11 12:21:19.921184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.999 [2024-06-11 12:21:19.921201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:06.999 [2024-06-11 12:21:19.921219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.999 [2024-06-11 12:21:19.921236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.999 [2024-06-11 12:21:19.921257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.999 [2024-06-11 12:21:19.921274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.999 [2024-06-11 12:21:19.921291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.999 [2024-06-11 12:21:19.921308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.999 [2024-06-11 12:21:19.921325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921386] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15227e0 was disconnected and freed. reset controller. 
00:27:06.999 [2024-06-11 12:21:19.921559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.999 [2024-06-11 12:21:19.921576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.999 [2024-06-11 12:21:19.921592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.999 [2024-06-11 12:21:19.921608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.999 [2024-06-11 12:21:19.921624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921632] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159dde0 is same with the state(5) to be set 00:27:06.999 [2024-06-11 12:21:19.921657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.999 [2024-06-11 12:21:19.921668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.999 [2024-06-11 12:21:19.921685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.999 [2024-06-11 12:21:19.921705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.999 [2024-06-11 12:21:19.921722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15832b0 is same with the state(5) to be set 00:27:06.999 [2024-06-11 12:21:19.921753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.999 [2024-06-11 12:21:19.921763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.999 [2024-06-11 12:21:19.921779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.999 [2024-06-11 12:21:19.921796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.999 [2024-06-11 12:21:19.921812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1575650 is same with the state(5) to be set 00:27:06.999 [2024-06-11 12:21:19.921839] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1403630 (9): Bad file descriptor 00:27:06.999 [2024-06-11 12:21:19.921856] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d72c0 (9): Bad file descriptor 00:27:06.999 [2024-06-11 12:21:19.921882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.999 [2024-06-11 12:21:19.921892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.999 [2024-06-11 12:21:19.921909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.999 [2024-06-11 12:21:19.921924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.999 [2024-06-11 12:21:19.921940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.999 [2024-06-11 12:21:19.921947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fb180 is same with the state(5) to be set 00:27:07.000 [2024-06-11 12:21:19.921971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:07.000 [2024-06-11 12:21:19.921980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.921991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:07.000 [2024-06-11 
12:21:19.921999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.922008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:07.000 [2024-06-11 12:21:19.922015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.922032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:07.000 [2024-06-11 12:21:19.922040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.922047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156b350 is same with the state(5) to be set 00:27:07.000 [2024-06-11 12:21:19.922058] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e49b0 (9): Bad file descriptor 00:27:07.000 [2024-06-11 12:21:19.922073] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13fa430 (9): Bad file descriptor 00:27:07.000 [2024-06-11 12:21:19.922089] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1577510 (9): Bad file descriptor 00:27:07.000 [2024-06-11 12:21:19.922180] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:07.000 [2024-06-11 12:21:19.922222] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:07.000 [2024-06-11 12:21:19.923514] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:07.000 [2024-06-11 12:21:19.923538] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x156b350 (9): Bad file descriptor 00:27:07.000 [2024-06-11 12:21:19.923688] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:07.000 [2024-06-11 12:21:19.923701] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:07.000 [2024-06-11 12:21:19.923710] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:07.000 [2024-06-11 12:21:19.924091] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.000 [2024-06-11 12:21:19.924417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.000 [2024-06-11 12:21:19.924645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.000 [2024-06-11 12:21:19.924659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x156b350 with addr=10.0.0.2, port=4420 00:27:07.000 [2024-06-11 12:21:19.924669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156b350 is same with the state(5) to be set 00:27:07.000 [2024-06-11 12:21:19.924758] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:07.000 [2024-06-11 12:21:19.924798] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:07.000 [2024-06-11 12:21:19.924815] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x156b350 (9): Bad file descriptor 00:27:07.000 [2024-06-11 12:21:19.924878] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:07.000 [2024-06-11 12:21:19.924888] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:07.000 [2024-06-11 12:21:19.924897] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:07.000 [2024-06-11 12:21:19.924946] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.000 [2024-06-11 12:21:19.931540] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159dde0 (9): Bad file descriptor 00:27:07.000 [2024-06-11 12:21:19.931571] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15832b0 (9): Bad file descriptor 00:27:07.000 [2024-06-11 12:21:19.931588] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1575650 (9): Bad file descriptor 00:27:07.000 [2024-06-11 12:21:19.931616] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13fb180 (9): Bad file descriptor 00:27:07.000 [2024-06-11 12:21:19.931736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.000 [2024-06-11 12:21:19.931748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.931763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.000 [2024-06-11 12:21:19.931773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.931782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.000 [2024-06-11 12:21:19.931791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.931800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.000 [2024-06-11 12:21:19.931808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.931819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.000 [2024-06-11 12:21:19.931826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.931835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.000 [2024-06-11 12:21:19.931844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.931853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.000 [2024-06-11 12:21:19.931861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.931870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.000 [2024-06-11 12:21:19.931878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.931888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.000 [2024-06-11 12:21:19.931896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.931906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.000 [2024-06-11 12:21:19.931913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.931923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.000 [2024-06-11 12:21:19.931931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.931944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.000 [2024-06-11 12:21:19.931953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.931962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.000 [2024-06-11 12:21:19.931970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.931980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.000 [2024-06-11 12:21:19.931988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.931998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.000 [2024-06-11 12:21:19.932006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.932015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.000 [2024-06-11 12:21:19.932027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.932037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.000 [2024-06-11 12:21:19.932045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.932054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.000 [2024-06-11 12:21:19.932062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.932072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.000 [2024-06-11 12:21:19.932080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.932090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.000 [2024-06-11 12:21:19.932097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.932108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.000 [2024-06-11 12:21:19.932115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.932124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.000 [2024-06-11 12:21:19.932133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.932142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.000 [2024-06-11 12:21:19.932150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.000 [2024-06-11 12:21:19.932160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 
[2024-06-11 12:21:19.932180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 
12:21:19.932354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932529] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.001 [2024-06-11 12:21:19.932747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.001 [2024-06-11 12:21:19.932756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.932763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.932772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.932780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.932789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.932797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.932806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.932814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.932829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.932837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.932846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.932853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.932864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.932871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.932880] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ce420 is same with the state(5) to be set 00:27:07.002 [2024-06-11 12:21:19.934188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.934204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.934217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.934227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.934236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.934244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.934254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.934262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.934272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.934279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.934289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.934297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.934306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.934315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.934324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.934332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.934343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.934350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.934363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.934370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.934379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.934388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.934398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.934406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.934415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.934423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.934434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.934442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.934453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.934460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.934470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.934478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.934488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.934496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.934506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.934514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.934524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.934531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.934541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.934549] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.934558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.934566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.934576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.934585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.934595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.934602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.934612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.934619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.002 [2024-06-11 12:21:19.934629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.002 [2024-06-11 12:21:19.934637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.934646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.934654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.934664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.934671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.934682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.934689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.934698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.934707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.934717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.934725] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.934735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.934742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.934752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.934760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.934769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.934778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.934788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.934796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.934808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.934816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.934826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.934834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.934844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.934851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.934861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.934869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.934879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.934886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.934895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.934903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.934913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.934921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.934932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.934939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.934949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.934957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.934967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.934976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.934985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.934993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.935002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.935010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.935024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.935034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.935043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.935051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.935061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.935068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.935078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.935085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.935096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.935103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.935113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.935121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.935131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.935139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.935148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.935156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.935166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.935174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.935184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.935191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.935200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.935208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.935217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.935225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.935234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.935241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.003 [2024-06-11 12:21:19.935253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.003 [2024-06-11 12:21:19.935260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.935270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.935277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.935286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.935294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.935304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.935311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.935321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.935329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.935338] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cf7e0 is same with the state(5) to be set 00:27:07.004 [2024-06-11 12:21:19.936630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.936644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.936655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.936663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.936673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.936680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.936692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.936700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.936711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.936719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.936728] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.936737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.936746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.936754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.936767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.936775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.936785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.936792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.936802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.936809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.936819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.936827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.936836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.936844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.936854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.936861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.936871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.936879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.936888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.936896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.936905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:37 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.936914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.936923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.936932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.936942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.936950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.936960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.936968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.936978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.936987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.936998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.937006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.937020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.937028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.937038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.937045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.937056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.937063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.937072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.937080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.937090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30720 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.937098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.937107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.937115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.937124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.937132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.937142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.937150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.937159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.937167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.937177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.937185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.937194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.937202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.937212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.937221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.937232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.004 [2024-06-11 12:21:19.937239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.004 [2024-06-11 12:21:19.937248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32000 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:07.005 [2024-06-11 12:21:19.937622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.937765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.937773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d0d30 is same with the state(5) to be set 00:27:07.005 [2024-06-11 12:21:19.939072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.939087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 
12:21:19.939099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.939107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.939116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.939124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.939134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.939141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.939151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.939159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.939169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.939181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.939190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.939198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.939207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.939215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.939226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.939234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.005 [2024-06-11 12:21:19.939243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.005 [2024-06-11 12:21:19.939251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939278] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939803] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.006 [2024-06-11 12:21:19.939820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.006 [2024-06-11 12:21:19.939827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.939839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.007 [2024-06-11 12:21:19.939846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.939856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.007 [2024-06-11 12:21:19.939863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.939872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.007 [2024-06-11 12:21:19.939880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.939890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.007 [2024-06-11 12:21:19.939898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.939907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.007 [2024-06-11 12:21:19.939915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.939924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.007 [2024-06-11 12:21:19.939932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.939941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.007 [2024-06-11 12:21:19.939948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.939959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.007 [2024-06-11 12:21:19.939966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.939976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.007 [2024-06-11 12:21:19.939984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.939994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.007 [2024-06-11 12:21:19.940002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.940012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.007 [2024-06-11 12:21:19.940025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.940035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.007 [2024-06-11 12:21:19.940043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.940053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.007 [2024-06-11 12:21:19.940065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.940074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.007 [2024-06-11 12:21:19.940082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.940092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.007 [2024-06-11 12:21:19.940099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.940109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.007 [2024-06-11 12:21:19.940116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.940125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.007 [2024-06-11 12:21:19.940133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.940142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.007 [2024-06-11 12:21:19.940150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.940160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.007 [2024-06-11 12:21:19.940166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.940176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.007 [2024-06-11 12:21:19.940183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.940192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.007 [2024-06-11 12:21:19.940200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.940208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d22f0 is same with the state(5) to be set 00:27:07.007 [2024-06-11 12:21:19.941500] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:07.007 [2024-06-11 12:21:19.941516] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.007 [2024-06-11 12:21:19.941525] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:07.007 [2024-06-11 12:21:19.941535] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:07.007 [2024-06-11 12:21:19.941647] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:07.007 [2024-06-11 12:21:19.942246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.007 [2024-06-11 12:21:19.942617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.007 [2024-06-11 12:21:19.942631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1577510 with addr=10.0.0.2, port=4420 00:27:07.007 [2024-06-11 12:21:19.942641] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577510 is same with the state(5) to be set 00:27:07.007 [2024-06-11 12:21:19.942858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.007 [2024-06-11 12:21:19.943263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.007 [2024-06-11 12:21:19.943302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d72c0 with addr=10.0.0.2, port=4420 00:27:07.007 [2024-06-11 12:21:19.943314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d72c0 is same with the state(5) to be set 00:27:07.007 [2024-06-11 12:21:19.943677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.007 [2024-06-11 12:21:19.944050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.007 [2024-06-11 12:21:19.944071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e49b0 with addr=10.0.0.2, port=4420 00:27:07.007 [2024-06-11 12:21:19.944080] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e49b0 is same 
with the state(5) to be set 00:27:07.007 [2024-06-11 12:21:19.944261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.007 [2024-06-11 12:21:19.944568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.007 [2024-06-11 12:21:19.944579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1403630 with addr=10.0.0.2, port=4420 00:27:07.007 [2024-06-11 12:21:19.944586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1403630 is same with the state(5) to be set 00:27:07.007 [2024-06-11 12:21:19.945699] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:07.007 [2024-06-11 12:21:19.946042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.007 [2024-06-11 12:21:19.946401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.007 [2024-06-11 12:21:19.946412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fa430 with addr=10.0.0.2, port=4420 00:27:07.007 [2024-06-11 12:21:19.946420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fa430 is same with the state(5) to be set 00:27:07.007 [2024-06-11 12:21:19.946431] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1577510 (9): Bad file descriptor 00:27:07.007 [2024-06-11 12:21:19.946441] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d72c0 (9): Bad file descriptor 00:27:07.007 [2024-06-11 12:21:19.946450] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e49b0 (9): Bad file descriptor 00:27:07.007 [2024-06-11 12:21:19.946459] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1403630 (9): Bad file descriptor 00:27:07.007 [2024-06-11 12:21:19.946541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.007 [2024-06-11 12:21:19.946553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.946568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.007 [2024-06-11 12:21:19.946578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.946589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.007 [2024-06-11 12:21:19.946597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.946608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.007 [2024-06-11 12:21:19.946615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.946631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.007 
[2024-06-11 12:21:19.946638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.007 [2024-06-11 12:21:19.946649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.946657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.946667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.946675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.946685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.946693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.946703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.946710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.946720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.946728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.946737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.946745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.946755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.946763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.946773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.946781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.946790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.946798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.946808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 
12:21:19.946815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.946826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.946833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.946843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.946852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.946863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.946871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.946881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.946888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.946898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.946906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.946916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.946924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.946934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.946942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.946952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.946959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.946969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.946977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.946987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.946994] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.947005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.947013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.947026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.947035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.947044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.947051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.947061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.947069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.947080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.947088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.947098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.947106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.947116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.947124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.947134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.947141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.947151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.947158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.947168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.947175] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.947185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.947193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.947203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.947211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.947220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.947227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.947237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.947245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.947255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.947264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.947273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.947281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.947290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.947299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.947309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.947317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.947326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.947335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.947345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.008 [2024-06-11 12:21:19.947353] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.008 [2024-06-11 12:21:19.947362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.947370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.947380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.947388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.947398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.947406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.947415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.947423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.947432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.947440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.947450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.947458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.947468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.947475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.947485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.947492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.947503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.947510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.947522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.947530] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.947540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.947548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.947558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.947565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.947575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.947582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.947592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.947600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.947610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.947617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.947627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.947635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.947645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.947653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.947663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.947671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.947680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.947688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.947697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151fc20 is same with the state(5) to be set 00:27:07.009 [2024-06-11 12:21:19.949005] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.949025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.949037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.949044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.949057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.949064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.949073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.949081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.949090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.949097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.949107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.949114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.949123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.949131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.949140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.949147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.949156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.949163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.949172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.949179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.949188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.949195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.949204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.949211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.949221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.949228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.949237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.949244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.949253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.949261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.949270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.949278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.949287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.949294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.949303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.949310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.949320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.949326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.949335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.949342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.009 [2024-06-11 12:21:19.949352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26880 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.009 [2024-06-11 12:21:19.949359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30976 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:07.010 [2024-06-11 12:21:19.949690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 
[2024-06-11 12:21:19.949853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.949986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.949997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 12:21:19.950004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.010 [2024-06-11 12:21:19.950013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.010 [2024-06-11 
12:21:19.950026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.950036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.950043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.950053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.950059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.950069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.950076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.950084] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1521200 is same with the state(5) to be set 00:27:07.011 [2024-06-11 12:21:19.951332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951446] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951625] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951800] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.011 [2024-06-11 12:21:19.951861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.011 [2024-06-11 12:21:19.951871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.951880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.951889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.951897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.951906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.951914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.951924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.951931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.951940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.951948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.951958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.951966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.951975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.951982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.951992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.951999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.952026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.952042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.952060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.952077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.952094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.952111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.952128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.952145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952154] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.952163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.952179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.952196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.952214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.952233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.952251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.952267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.952284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.952301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.952318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952327] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.952334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.952352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.952369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.952386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.952403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.952420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.952437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.952456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.952464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1523dc0 is same with the state(5) to be set 00:27:07.012 [2024-06-11 12:21:19.953749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.953761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.953773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.953782] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.953791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.953799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.953809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.953817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.953827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.012 [2024-06-11 12:21:19.953835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.012 [2024-06-11 12:21:19.953844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.953851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.953862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.953870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.953880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.953888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.953898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.953905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.953915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.953922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.953931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.953940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.953952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.953960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.953969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.953977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.953987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.953995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.954004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.954012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.954026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.954035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.954044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.954052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.954062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.954070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.954079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.954087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.954097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.954105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.954114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.954121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.954132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.954139] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.954148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.954156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.954165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.954175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.954184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.954192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.954201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.954209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.954219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.954227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.954237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.954245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.954254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.954262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.954272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.954280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.954290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.954298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.954307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.954316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.954326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.954334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.954344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.954351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.954361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.954368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.954378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.954386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.954396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.954404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.954413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.954421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.954430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.954437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.013 [2024-06-11 12:21:19.954447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.013 [2024-06-11 12:21:19.954454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.014 [2024-06-11 12:21:19.954464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.014 [2024-06-11 12:21:19.954471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.014 [2024-06-11 12:21:19.954481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.014 [2024-06-11 12:21:19.954489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.014 [2024-06-11 12:21:19.954499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.014 [2024-06-11 12:21:19.954507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.014 [2024-06-11 12:21:19.954517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.014 [2024-06-11 12:21:19.954525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.014 [2024-06-11 12:21:19.954535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.014 [2024-06-11 12:21:19.954542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.014 [2024-06-11 12:21:19.954551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.014 [2024-06-11 12:21:19.954559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.014 [2024-06-11 12:21:19.954569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.014 [2024-06-11 12:21:19.954577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.014 [2024-06-11 12:21:19.954586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.014 [2024-06-11 12:21:19.954593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.014 [2024-06-11 12:21:19.954603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.014 [2024-06-11 12:21:19.954612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.014 [2024-06-11 12:21:19.954622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.014 [2024-06-11 12:21:19.954630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.014 [2024-06-11 12:21:19.954640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.014 [2024-06-11 12:21:19.954648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.014 [2024-06-11 12:21:19.954657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.014 [2024-06-11 12:21:19.954664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.014 [2024-06-11 12:21:19.954674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.014 [2024-06-11 12:21:19.954682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.014 [2024-06-11 12:21:19.954692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.014 [2024-06-11 12:21:19.954699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.014 [2024-06-11 12:21:19.954709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.014 [2024-06-11 12:21:19.954716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.014 [2024-06-11 12:21:19.954726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.014 [2024-06-11 12:21:19.954733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.014 [2024-06-11 12:21:19.954743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.014 [2024-06-11 12:21:19.954751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.014 [2024-06-11 12:21:19.954760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.014 [2024-06-11 12:21:19.954768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.014 [2024-06-11 12:21:19.954778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.014 [2024-06-11 12:21:19.954786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.014 [2024-06-11 12:21:19.954795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.014 [2024-06-11 12:21:19.954804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.014 [2024-06-11 12:21:19.954812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.014 [2024-06-11 12:21:19.954820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.014 [2024-06-11 12:21:19.954831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.014 [2024-06-11 12:21:19.954839] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.014 [2024-06-11 12:21:19.954849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.014 [2024-06-11 12:21:19.954857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.014 [2024-06-11 12:21:19.954867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.014 [2024-06-11 12:21:19.954875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.014 [2024-06-11 12:21:19.954883] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149ef20 is same with the state(5) to be set 00:27:07.014 [2024-06-11 12:21:19.956371] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:07.014 [2024-06-11 12:21:19.956393] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:07.014 [2024-06-11 12:21:19.956403] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:07.014 task offset: 28928 on job bdev=Nvme10n1 fails 00:27:07.014 00:27:07.014 Latency(us) 00:27:07.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:07.014 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.014 Job: Nvme1n1 ended in about 0.62 seconds with error 00:27:07.014 Verification LBA range: start 0x0 length 0x400 00:27:07.014 Nvme1n1 : 0.62 336.30 21.02 103.48 0.00 144346.25 69468.16 150295.89 00:27:07.014 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.014 Job: Nvme2n1 ended in about 0.62 seconds with error 00:27:07.014 Verification LBA range: start 0x0 length 0x400 00:27:07.014 Nvme2n1 : 0.62 334.98 20.94 103.07 0.00 142976.10 67283.63 126702.93 00:27:07.014 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.014 Job: Nvme3n1 ended in about 0.62 seconds with error 00:27:07.014 Verification LBA range: start 0x0 length 0x400 00:27:07.014 Nvme3n1 : 0.62 333.67 20.85 102.67 0.00 141680.64 72089.60 131072.00 00:27:07.014 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.014 Job: Nvme4n1 ended in about 0.63 seconds with error 00:27:07.014 Verification LBA range: start 0x0 length 0x400 00:27:07.014 Nvme4n1 : 0.63 332.38 20.77 102.27 0.00 140361.09 74274.13 131945.81 00:27:07.014 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.014 Job: Nvme5n1 ended in about 0.63 seconds with error 00:27:07.014 Verification LBA range: start 0x0 length 0x400 00:27:07.014 Nvme5n1 : 0.63 328.45 20.53 101.06 0.00 140251.86 75584.85 132819.63 00:27:07.014 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.014 Job: Nvme6n1 ended in about 0.64 seconds with error 00:27:07.014 Verification LBA range: start 0x0 length 0x400 00:27:07.014 Nvme6n1 : 0.64 327.23 20.45 100.69 0.00 138937.93 72089.60 123207.68 00:27:07.014 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.014 Job: Nvme7n1 ended in about 0.61 seconds with error 00:27:07.014 Verification LBA range: 
start 0x0 length 0x400 00:27:07.014 Nvme7n1 : 0.61 342.19 21.39 105.29 0.00 130509.00 63351.47 119712.43 00:27:07.014 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.014 Job: Nvme8n1 ended in about 0.64 seconds with error 00:27:07.014 Verification LBA range: start 0x0 length 0x400 00:27:07.014 Nvme8n1 : 0.64 326.01 20.38 100.31 0.00 135730.20 88255.15 102236.16 00:27:07.014 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.014 Job: Nvme9n1 ended in about 0.64 seconds with error 00:27:07.014 Verification LBA range: start 0x0 length 0x400 00:27:07.014 Nvme9n1 : 0.64 324.78 20.30 99.93 0.00 134503.81 69031.25 114469.55 00:27:07.014 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.014 Job: Nvme10n1 ended in about 0.59 seconds with error 00:27:07.014 Verification LBA range: start 0x0 length 0x400 00:27:07.014 Nvme10n1 : 0.59 354.85 22.18 109.18 0.00 119949.20 3167.57 114469.55 00:27:07.014 =================================================================================================================== 00:27:07.014 Total : 3340.85 208.80 1027.95 0.00 136924.61 3167.57 150295.89 00:27:07.014 [2024-06-11 12:21:19.982570] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:07.014 [2024-06-11 12:21:19.982615] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:07.014 [2024-06-11 12:21:19.983030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.015 [2024-06-11 12:21:19.983379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.015 [2024-06-11 12:21:19.983391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x156b350 with addr=10.0.0.2, port=4420 00:27:07.015 [2024-06-11 12:21:19.983403] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156b350 is same with the state(5) to be set 00:27:07.015 [2024-06-11 12:21:19.983418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13fa430 (9): Bad file descriptor 00:27:07.015 [2024-06-11 12:21:19.983428] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:07.015 [2024-06-11 12:21:19.983435] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:07.015 [2024-06-11 12:21:19.983444] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:07.015 [2024-06-11 12:21:19.983461] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.015 [2024-06-11 12:21:19.983467] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.015 [2024-06-11 12:21:19.983475] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.015 [2024-06-11 12:21:19.983486] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:07.015 [2024-06-11 12:21:19.983494] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:07.015 [2024-06-11 12:21:19.983502] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:27:07.015 [2024-06-11 12:21:19.983513] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:07.015 [2024-06-11 12:21:19.983519] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:07.015 [2024-06-11 12:21:19.983526] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:07.015 [2024-06-11 12:21:19.983557] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:07.015 [2024-06-11 12:21:19.983570] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:07.015 [2024-06-11 12:21:19.983584] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:07.015 [2024-06-11 12:21:19.983598] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:07.015 [2024-06-11 12:21:19.983614] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:07.015 [2024-06-11 12:21:19.983632] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x156b350 (9): Bad file descriptor 00:27:07.015 [2024-06-11 12:21:19.983749] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.015 [2024-06-11 12:21:19.983760] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.015 [2024-06-11 12:21:19.983767] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.015 [2024-06-11 12:21:19.983774] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.015 [2024-06-11 12:21:19.984094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.015 [2024-06-11 12:21:19.984481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.015 [2024-06-11 12:21:19.984493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x159dde0 with addr=10.0.0.2, port=4420 00:27:07.015 [2024-06-11 12:21:19.984502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159dde0 is same with the state(5) to be set 00:27:07.015 [2024-06-11 12:21:19.984854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.015 [2024-06-11 12:21:19.985220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.015 [2024-06-11 12:21:19.985232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fb180 with addr=10.0.0.2, port=4420 00:27:07.015 [2024-06-11 12:21:19.985240] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fb180 is same with the state(5) to be set 00:27:07.015 [2024-06-11 12:21:19.985543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.015 [2024-06-11 12:21:19.985860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.015 [2024-06-11 12:21:19.985870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1575650 with addr=10.0.0.2, port=4420 00:27:07.015 [2024-06-11 12:21:19.985878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1575650 is same with the state(5) to be set 00:27:07.015 [2024-06-11 12:21:19.986074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.015 [2024-06-11 12:21:19.986460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.015 [2024-06-11 12:21:19.986471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15832b0 with addr=10.0.0.2, port=4420 00:27:07.015 [2024-06-11 12:21:19.986478] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15832b0 is same with the state(5) to be set 00:27:07.015 [2024-06-11 12:21:19.986487] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:07.015 [2024-06-11 12:21:19.986494] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:07.015 [2024-06-11 12:21:19.986501] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:07.015 [2024-06-11 12:21:19.986543] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:07.015 [2024-06-11 12:21:19.986554] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:07.015 [2024-06-11 12:21:19.987611] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:07.015 [2024-06-11 12:21:19.987644] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159dde0 (9): Bad file descriptor 00:27:07.015 [2024-06-11 12:21:19.987654] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13fb180 (9): Bad file descriptor 00:27:07.015 [2024-06-11 12:21:19.987663] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1575650 (9): Bad file descriptor 00:27:07.015 [2024-06-11 12:21:19.987672] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15832b0 (9): Bad file descriptor 00:27:07.015 [2024-06-11 12:21:19.987683] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:07.015 [2024-06-11 12:21:19.987690] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:07.015 [2024-06-11 12:21:19.987697] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:07.015 [2024-06-11 12:21:19.987762] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:07.015 [2024-06-11 12:21:19.987774] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:07.015 [2024-06-11 12:21:19.987783] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.015 [2024-06-11 12:21:19.987791] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:07.015 [2024-06-11 12:21:19.987799] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.015 [2024-06-11 12:21:19.987827] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:07.015 [2024-06-11 12:21:19.987834] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:07.015 [2024-06-11 12:21:19.987841] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:07.015 [2024-06-11 12:21:19.987851] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:07.015 [2024-06-11 12:21:19.987857] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:07.015 [2024-06-11 12:21:19.987864] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:07.015 [2024-06-11 12:21:19.987874] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:07.015 [2024-06-11 12:21:19.987881] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:07.015 [2024-06-11 12:21:19.987887] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:27:07.015 [2024-06-11 12:21:19.987897] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:07.015 [2024-06-11 12:21:19.987904] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:07.015 [2024-06-11 12:21:19.987911] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:07.015 [2024-06-11 12:21:19.987970] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.015 [2024-06-11 12:21:19.987979] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.015 [2024-06-11 12:21:19.987985] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.015 [2024-06-11 12:21:19.987991] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.015 [2024-06-11 12:21:19.988342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.015 [2024-06-11 12:21:19.988510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.015 [2024-06-11 12:21:19.988521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1403630 with addr=10.0.0.2, port=4420 00:27:07.015 [2024-06-11 12:21:19.988530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1403630 is same with the state(5) to be set 00:27:07.015 [2024-06-11 12:21:19.988873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.015 [2024-06-11 12:21:19.989082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.015 [2024-06-11 12:21:19.989093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e49b0 with addr=10.0.0.2, port=4420 00:27:07.015 [2024-06-11 12:21:19.989100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e49b0 is same with the state(5) to be set 00:27:07.015 [2024-06-11 12:21:19.989422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.015 [2024-06-11 12:21:19.989736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.015 [2024-06-11 12:21:19.989747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d72c0 with addr=10.0.0.2, port=4420 00:27:07.015 [2024-06-11 12:21:19.989754] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d72c0 is same with the state(5) to be set 00:27:07.015 [2024-06-11 12:21:19.990044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.015 [2024-06-11 12:21:19.990399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.015 [2024-06-11 12:21:19.990409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1577510 with addr=10.0.0.2, port=4420 00:27:07.015 [2024-06-11 12:21:19.990417] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577510 is same with the state(5) to be set 00:27:07.015 [2024-06-11 12:21:19.990446] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1403630 (9): Bad file descriptor 00:27:07.015 [2024-06-11 12:21:19.990456] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e49b0 (9): Bad file descriptor 
00:27:07.016 [2024-06-11 12:21:19.990465] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d72c0 (9): Bad file descriptor 00:27:07.016 [2024-06-11 12:21:19.990474] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1577510 (9): Bad file descriptor 00:27:07.016 [2024-06-11 12:21:19.990500] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:07.016 [2024-06-11 12:21:19.990507] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:07.016 [2024-06-11 12:21:19.990515] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:07.016 [2024-06-11 12:21:19.990526] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:07.016 [2024-06-11 12:21:19.990532] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:07.016 [2024-06-11 12:21:19.990539] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:07.016 [2024-06-11 12:21:19.990550] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.016 [2024-06-11 12:21:19.990556] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.016 [2024-06-11 12:21:19.990563] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.016 [2024-06-11 12:21:19.990573] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:07.016 [2024-06-11 12:21:19.990579] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:07.016 [2024-06-11 12:21:19.990586] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:07.016 [2024-06-11 12:21:19.990613] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.016 [2024-06-11 12:21:19.990620] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.016 [2024-06-11 12:21:19.990626] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.016 [2024-06-11 12:21:19.990632] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
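Note for anyone triaging the error burst above: errno 111 is ECONNREFUSED, i.e. nothing is listening on 10.0.0.2:4420 because the shutdown test has already killed the target, so every reconnect attempt from the initiator side is refused until the test tears down. A small, hypothetical bash probe (not part of shutdown.sh; the helper name is made up) that tells a refused port apart from a live listener:

check_listener() {
    local addr=$1 port=$2
    # /dev/tcp/<addr>/<port> is a bash pseudo-device; the connect attempt
    # fails fast with "Connection refused" when no listener exists.
    if timeout 1 bash -c "exec 3<>/dev/tcp/${addr}/${port}" 2>/dev/null; then
        echo "${addr}:${port} is accepting connections"
    else
        echo "${addr}:${port} refused or timed out (matches the errno 111 entries above)"
    fi
}
check_listener 10.0.0.2 4420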
00:27:07.278 12:21:20 -- target/shutdown.sh@135 -- # nvmfpid= 00:27:07.278 12:21:20 -- target/shutdown.sh@138 -- # sleep 1 00:27:08.220 12:21:21 -- target/shutdown.sh@141 -- # kill -9 1609511 00:27:08.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (1609511) - No such process 00:27:08.220 12:21:21 -- target/shutdown.sh@141 -- # true 00:27:08.220 12:21:21 -- target/shutdown.sh@143 -- # stoptarget 00:27:08.220 12:21:21 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:08.220 12:21:21 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:08.220 12:21:21 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:08.220 12:21:21 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:08.220 12:21:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:08.220 12:21:21 -- nvmf/common.sh@116 -- # sync 00:27:08.220 12:21:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:08.220 12:21:21 -- nvmf/common.sh@119 -- # set +e 00:27:08.220 12:21:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:08.220 12:21:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:08.220 rmmod nvme_tcp 00:27:08.220 rmmod nvme_fabrics 00:27:08.220 rmmod nvme_keyring 00:27:08.220 12:21:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:08.220 12:21:21 -- nvmf/common.sh@123 -- # set -e 00:27:08.220 12:21:21 -- nvmf/common.sh@124 -- # return 0 00:27:08.220 12:21:21 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:27:08.220 12:21:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:08.220 12:21:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:08.220 12:21:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:08.220 12:21:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:08.220 12:21:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:08.220 12:21:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.220 12:21:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:08.220 12:21:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.766 12:21:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:10.766 00:27:10.766 real 0m7.393s 00:27:10.766 user 0m17.365s 00:27:10.766 sys 0m1.185s 00:27:10.766 12:21:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:10.766 12:21:23 -- common/autotest_common.sh@10 -- # set +x 00:27:10.766 ************************************ 00:27:10.766 END TEST nvmf_shutdown_tc3 00:27:10.766 ************************************ 00:27:10.766 12:21:23 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:27:10.766 00:27:10.766 real 0m32.319s 00:27:10.766 user 1m16.208s 00:27:10.766 sys 0m9.280s 00:27:10.766 12:21:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:10.766 12:21:23 -- common/autotest_common.sh@10 -- # set +x 00:27:10.766 ************************************ 00:27:10.766 END TEST nvmf_shutdown 00:27:10.766 ************************************ 00:27:10.766 12:21:23 -- nvmf/nvmf.sh@85 -- # timing_exit target 00:27:10.766 12:21:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:10.766 12:21:23 -- common/autotest_common.sh@10 -- # set +x 00:27:10.766 12:21:23 -- nvmf/nvmf.sh@87 -- # timing_enter host 00:27:10.766 12:21:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:10.766 12:21:23 -- common/autotest_common.sh@10 -- # set +x 00:27:10.766 
12:21:23 -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:27:10.766 12:21:23 -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:10.766 12:21:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:10.766 12:21:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:10.766 12:21:23 -- common/autotest_common.sh@10 -- # set +x 00:27:10.766 ************************************ 00:27:10.766 START TEST nvmf_multicontroller 00:27:10.766 ************************************ 00:27:10.766 12:21:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:10.766 * Looking for test storage... 00:27:10.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:10.766 12:21:23 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:10.766 12:21:23 -- nvmf/common.sh@7 -- # uname -s 00:27:10.766 12:21:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:10.766 12:21:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:10.766 12:21:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:10.767 12:21:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:10.767 12:21:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:10.767 12:21:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:10.767 12:21:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:10.767 12:21:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:10.767 12:21:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:10.767 12:21:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:10.767 12:21:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:10.767 12:21:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:10.767 12:21:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:10.767 12:21:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:10.767 12:21:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:10.767 12:21:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:10.767 12:21:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.767 12:21:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.767 12:21:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.767 12:21:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.767 12:21:23 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.767 12:21:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.767 12:21:23 -- paths/export.sh@5 -- # export PATH 00:27:10.767 12:21:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.767 12:21:23 -- nvmf/common.sh@46 -- # : 0 00:27:10.767 12:21:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:10.767 12:21:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:10.767 12:21:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:10.767 12:21:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:10.767 12:21:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:10.767 12:21:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:10.767 12:21:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:10.767 12:21:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:10.767 12:21:23 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:10.767 12:21:23 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:10.767 12:21:23 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:10.767 12:21:23 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:10.767 12:21:23 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:10.767 12:21:23 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:10.767 12:21:23 -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:10.767 12:21:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:10.767 12:21:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.767 12:21:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:10.767 12:21:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:10.767 12:21:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:10.767 12:21:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.767 12:21:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:10.767 12:21:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:27:10.767 12:21:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:10.767 12:21:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:10.767 12:21:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:10.767 12:21:23 -- common/autotest_common.sh@10 -- # set +x 00:27:18.963 12:21:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:18.963 12:21:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:18.963 12:21:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:18.963 12:21:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:18.963 12:21:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:18.963 12:21:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:18.963 12:21:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:18.963 12:21:30 -- nvmf/common.sh@294 -- # net_devs=() 00:27:18.963 12:21:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:18.963 12:21:30 -- nvmf/common.sh@295 -- # e810=() 00:27:18.963 12:21:30 -- nvmf/common.sh@295 -- # local -ga e810 00:27:18.963 12:21:30 -- nvmf/common.sh@296 -- # x722=() 00:27:18.963 12:21:30 -- nvmf/common.sh@296 -- # local -ga x722 00:27:18.963 12:21:30 -- nvmf/common.sh@297 -- # mlx=() 00:27:18.963 12:21:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:18.963 12:21:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:18.963 12:21:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:18.963 12:21:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:18.963 12:21:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:18.963 12:21:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:18.963 12:21:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:18.963 12:21:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:18.963 12:21:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:18.963 12:21:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:18.963 12:21:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:18.963 12:21:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:18.963 12:21:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:18.963 12:21:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:18.963 12:21:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:18.963 12:21:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:18.963 12:21:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:18.963 12:21:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:18.963 12:21:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:18.963 12:21:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:18.963 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:18.963 12:21:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:18.963 12:21:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:18.963 12:21:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.963 12:21:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.963 12:21:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:18.963 12:21:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:18.963 12:21:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:18.963 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:18.963 12:21:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 
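The device-discovery loop above whitelists the two Intel E810 functions (0x8086:0x159b at 0000:31:00.0 and 0000:31:00.1); the entries just below resolve them to the renamed net devices cvl_0_0 and cvl_0_1. A minimal sketch of that sysfs lookup, mirroring the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion the script uses:

for pci in 0000:31:00.0 0000:31:00.1; do
    # Each PCI function exposes its bound kernel net device(s) under sysfs.
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$path" ] && echo "$pci -> ${path##*/}"
    done
done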
00:27:18.963 12:21:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:18.963 12:21:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.963 12:21:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.963 12:21:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:18.963 12:21:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:18.963 12:21:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:18.963 12:21:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:18.963 12:21:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:18.963 12:21:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.963 12:21:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:18.963 12:21:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.963 12:21:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:18.963 Found net devices under 0000:31:00.0: cvl_0_0 00:27:18.963 12:21:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.963 12:21:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:18.963 12:21:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.963 12:21:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:18.963 12:21:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.963 12:21:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:18.963 Found net devices under 0000:31:00.1: cvl_0_1 00:27:18.963 12:21:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.963 12:21:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:18.963 12:21:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:18.963 12:21:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:18.963 12:21:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:18.963 12:21:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:18.963 12:21:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:18.963 12:21:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:18.963 12:21:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:18.963 12:21:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:18.963 12:21:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:18.963 12:21:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:18.963 12:21:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:18.963 12:21:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:18.963 12:21:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:18.963 12:21:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:18.963 12:21:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:18.963 12:21:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:18.963 12:21:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:18.963 12:21:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:18.963 12:21:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:18.963 12:21:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:18.963 12:21:30 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:18.963 12:21:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:18.963 12:21:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:27:18.963 12:21:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:18.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:18.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:27:18.963 00:27:18.963 --- 10.0.0.2 ping statistics --- 00:27:18.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.963 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:27:18.963 12:21:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:18.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:18.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:27:18.963 00:27:18.963 --- 10.0.0.1 ping statistics --- 00:27:18.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.963 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:27:18.963 12:21:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.963 12:21:30 -- nvmf/common.sh@410 -- # return 0 00:27:18.963 12:21:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:18.963 12:21:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.963 12:21:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:18.963 12:21:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:18.963 12:21:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.963 12:21:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:18.963 12:21:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:18.963 12:21:30 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:18.963 12:21:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:18.963 12:21:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:18.963 12:21:30 -- common/autotest_common.sh@10 -- # set +x 00:27:18.963 12:21:30 -- nvmf/common.sh@469 -- # nvmfpid=1614506 00:27:18.963 12:21:30 -- nvmf/common.sh@470 -- # waitforlisten 1614506 00:27:18.963 12:21:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:18.963 12:21:30 -- common/autotest_common.sh@819 -- # '[' -z 1614506 ']' 00:27:18.963 12:21:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.963 12:21:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:18.963 12:21:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.963 12:21:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:18.963 12:21:30 -- common/autotest_common.sh@10 -- # set +x 00:27:18.963 [2024-06-11 12:21:30.958662] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:18.963 [2024-06-11 12:21:30.958726] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.963 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.963 [2024-06-11 12:21:31.024243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:18.963 [2024-06-11 12:21:31.065130] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:18.963 [2024-06-11 12:21:31.065247] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:18.963 [2024-06-11 12:21:31.065255] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.963 [2024-06-11 12:21:31.065261] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:18.963 [2024-06-11 12:21:31.065398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:18.963 [2024-06-11 12:21:31.065559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.963 [2024-06-11 12:21:31.065561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:18.963 12:21:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:18.963 12:21:31 -- common/autotest_common.sh@852 -- # return 0 00:27:18.963 12:21:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:18.963 12:21:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:18.963 12:21:31 -- common/autotest_common.sh@10 -- # set +x 00:27:18.963 12:21:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:18.963 12:21:31 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:18.964 12:21:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.964 12:21:31 -- common/autotest_common.sh@10 -- # set +x 00:27:18.964 [2024-06-11 12:21:31.826651] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:18.964 12:21:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.964 12:21:31 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:18.964 12:21:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.964 12:21:31 -- common/autotest_common.sh@10 -- # set +x 00:27:18.964 Malloc0 00:27:18.964 12:21:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.964 12:21:31 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:18.964 12:21:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.964 12:21:31 -- common/autotest_common.sh@10 -- # set +x 00:27:18.964 12:21:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.964 12:21:31 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:18.964 12:21:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.964 12:21:31 -- common/autotest_common.sh@10 -- # set +x 00:27:18.964 12:21:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.964 12:21:31 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:18.964 12:21:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.964 12:21:31 -- common/autotest_common.sh@10 -- # set +x 00:27:18.964 [2024-06-11 12:21:31.898890] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:18.964 12:21:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.964 12:21:31 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:18.964 12:21:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.964 12:21:31 -- common/autotest_common.sh@10 -- # set +x 00:27:18.964 [2024-06-11 12:21:31.910835] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:18.964 12:21:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
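For readability, the nvmf_tcp_init plumbing logged earlier in this test amounts to the following sequence; interface names, addresses, and the 4420 port are taken from this run, and this is only a condensed restatement, not a replacement for nvmf/common.sh:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root ns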
00:27:18.964 12:21:31 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:18.964 12:21:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.964 12:21:31 -- common/autotest_common.sh@10 -- # set +x 00:27:18.964 Malloc1 00:27:18.964 12:21:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.964 12:21:31 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:18.964 12:21:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.964 12:21:31 -- common/autotest_common.sh@10 -- # set +x 00:27:18.964 12:21:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.964 12:21:31 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:18.964 12:21:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.964 12:21:31 -- common/autotest_common.sh@10 -- # set +x 00:27:18.964 12:21:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.964 12:21:31 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:18.964 12:21:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.964 12:21:31 -- common/autotest_common.sh@10 -- # set +x 00:27:18.964 12:21:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.964 12:21:31 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:18.964 12:21:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.964 12:21:31 -- common/autotest_common.sh@10 -- # set +x 00:27:18.964 12:21:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.964 12:21:31 -- host/multicontroller.sh@44 -- # bdevperf_pid=1614685 00:27:18.964 12:21:31 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:18.964 12:21:31 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:18.964 12:21:31 -- host/multicontroller.sh@47 -- # waitforlisten 1614685 /var/tmp/bdevperf.sock 00:27:18.964 12:21:31 -- common/autotest_common.sh@819 -- # '[' -z 1614685 ']' 00:27:18.964 12:21:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:18.964 12:21:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:18.964 12:21:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:18.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
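The rpc_cmd calls above provision the target side for this test; rpc_cmd is the autotest wrapper around scripts/rpc.py, so the same setup could be issued directly. A sketch, assuming the target's default /var/tmp/spdk.sock RPC socket:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# cnode2 is provisioned the same way with Malloc1 and serial SPDK00000000000002.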
00:27:18.964 12:21:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:18.964 12:21:31 -- common/autotest_common.sh@10 -- # set +x 00:27:19.905 12:21:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:19.905 12:21:32 -- common/autotest_common.sh@852 -- # return 0 00:27:19.905 12:21:32 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:19.905 12:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.905 12:21:32 -- common/autotest_common.sh@10 -- # set +x 00:27:20.166 NVMe0n1 00:27:20.166 12:21:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:20.166 12:21:33 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:20.166 12:21:33 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:20.166 12:21:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:20.166 12:21:33 -- common/autotest_common.sh@10 -- # set +x 00:27:20.166 12:21:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:20.166 1 00:27:20.166 12:21:33 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:20.166 12:21:33 -- common/autotest_common.sh@640 -- # local es=0 00:27:20.166 12:21:33 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:20.166 12:21:33 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:20.166 12:21:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:20.166 12:21:33 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:20.166 12:21:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:20.166 12:21:33 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:20.166 12:21:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:20.166 12:21:33 -- common/autotest_common.sh@10 -- # set +x 00:27:20.166 request: 00:27:20.166 { 00:27:20.166 "name": "NVMe0", 00:27:20.166 "trtype": "tcp", 00:27:20.166 "traddr": "10.0.0.2", 00:27:20.166 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:20.166 "hostaddr": "10.0.0.2", 00:27:20.166 "hostsvcid": "60000", 00:27:20.166 "adrfam": "ipv4", 00:27:20.166 "trsvcid": "4420", 00:27:20.166 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:20.166 "method": "bdev_nvme_attach_controller", 00:27:20.166 "req_id": 1 00:27:20.166 } 00:27:20.166 Got JSON-RPC error response 00:27:20.166 response: 00:27:20.166 { 00:27:20.166 "code": -114, 00:27:20.166 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:20.166 } 00:27:20.166 12:21:33 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:20.166 12:21:33 -- common/autotest_common.sh@643 -- # es=1 00:27:20.166 12:21:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:20.166 12:21:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:20.166 12:21:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:20.166 12:21:33 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:20.166 12:21:33 -- common/autotest_common.sh@640 -- # local es=0 00:27:20.166 12:21:33 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:20.166 12:21:33 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:20.166 12:21:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:20.166 12:21:33 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:20.166 12:21:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:20.166 12:21:33 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:20.166 12:21:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:20.166 12:21:33 -- common/autotest_common.sh@10 -- # set +x 00:27:20.166 request: 00:27:20.166 { 00:27:20.166 "name": "NVMe0", 00:27:20.166 "trtype": "tcp", 00:27:20.166 "traddr": "10.0.0.2", 00:27:20.166 "hostaddr": "10.0.0.2", 00:27:20.166 "hostsvcid": "60000", 00:27:20.166 "adrfam": "ipv4", 00:27:20.166 "trsvcid": "4420", 00:27:20.166 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:20.166 "method": "bdev_nvme_attach_controller", 00:27:20.166 "req_id": 1 00:27:20.166 } 00:27:20.166 Got JSON-RPC error response 00:27:20.166 response: 00:27:20.166 { 00:27:20.166 "code": -114, 00:27:20.166 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:20.166 } 00:27:20.166 12:21:33 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:20.166 12:21:33 -- common/autotest_common.sh@643 -- # es=1 00:27:20.166 12:21:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:20.166 12:21:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:20.166 12:21:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:20.166 12:21:33 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:20.166 12:21:33 -- common/autotest_common.sh@640 -- # local es=0 00:27:20.166 12:21:33 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:20.166 12:21:33 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:20.166 12:21:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:20.166 12:21:33 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:20.166 12:21:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:20.166 12:21:33 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:20.166 12:21:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:20.166 12:21:33 -- common/autotest_common.sh@10 -- # set +x 00:27:20.166 request: 00:27:20.166 { 00:27:20.166 "name": "NVMe0", 00:27:20.166 "trtype": "tcp", 00:27:20.166 "traddr": "10.0.0.2", 00:27:20.166 "hostaddr": 
"10.0.0.2", 00:27:20.166 "hostsvcid": "60000", 00:27:20.166 "adrfam": "ipv4", 00:27:20.166 "trsvcid": "4420", 00:27:20.166 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:20.166 "multipath": "disable", 00:27:20.166 "method": "bdev_nvme_attach_controller", 00:27:20.166 "req_id": 1 00:27:20.166 } 00:27:20.166 Got JSON-RPC error response 00:27:20.166 response: 00:27:20.166 { 00:27:20.166 "code": -114, 00:27:20.166 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:20.166 } 00:27:20.166 12:21:33 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:20.166 12:21:33 -- common/autotest_common.sh@643 -- # es=1 00:27:20.166 12:21:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:20.166 12:21:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:20.166 12:21:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:20.166 12:21:33 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:20.166 12:21:33 -- common/autotest_common.sh@640 -- # local es=0 00:27:20.166 12:21:33 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:20.166 12:21:33 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:20.166 12:21:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:20.166 12:21:33 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:20.166 12:21:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:20.166 12:21:33 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:20.166 12:21:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:20.166 12:21:33 -- common/autotest_common.sh@10 -- # set +x 00:27:20.166 request: 00:27:20.166 { 00:27:20.166 "name": "NVMe0", 00:27:20.166 "trtype": "tcp", 00:27:20.166 "traddr": "10.0.0.2", 00:27:20.166 "hostaddr": "10.0.0.2", 00:27:20.166 "hostsvcid": "60000", 00:27:20.166 "adrfam": "ipv4", 00:27:20.166 "trsvcid": "4420", 00:27:20.166 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:20.166 "multipath": "failover", 00:27:20.166 "method": "bdev_nvme_attach_controller", 00:27:20.166 "req_id": 1 00:27:20.167 } 00:27:20.167 Got JSON-RPC error response 00:27:20.167 response: 00:27:20.167 { 00:27:20.167 "code": -114, 00:27:20.167 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:20.167 } 00:27:20.167 12:21:33 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:20.167 12:21:33 -- common/autotest_common.sh@643 -- # es=1 00:27:20.167 12:21:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:20.167 12:21:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:20.167 12:21:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:20.167 12:21:33 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:20.167 12:21:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:20.167 12:21:33 -- common/autotest_common.sh@10 -- # set +x 00:27:20.427 00:27:20.427 12:21:33 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:27:20.427 12:21:33 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:20.427 12:21:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:20.427 12:21:33 -- common/autotest_common.sh@10 -- # set +x 00:27:20.427 12:21:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:20.427 12:21:33 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:20.427 12:21:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:20.427 12:21:33 -- common/autotest_common.sh@10 -- # set +x 00:27:20.427 00:27:20.427 12:21:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:20.427 12:21:33 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:20.427 12:21:33 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:20.427 12:21:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:20.427 12:21:33 -- common/autotest_common.sh@10 -- # set +x 00:27:20.427 12:21:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:20.687 12:21:33 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:20.687 12:21:33 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:21.627 0 00:27:21.627 12:21:34 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:21.627 12:21:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:21.627 12:21:34 -- common/autotest_common.sh@10 -- # set +x 00:27:21.627 12:21:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:21.628 12:21:34 -- host/multicontroller.sh@100 -- # killprocess 1614685 00:27:21.628 12:21:34 -- common/autotest_common.sh@926 -- # '[' -z 1614685 ']' 00:27:21.628 12:21:34 -- common/autotest_common.sh@930 -- # kill -0 1614685 00:27:21.628 12:21:34 -- common/autotest_common.sh@931 -- # uname 00:27:21.628 12:21:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:21.628 12:21:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1614685 00:27:21.628 12:21:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:21.628 12:21:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:21.628 12:21:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1614685' 00:27:21.628 killing process with pid 1614685 00:27:21.628 12:21:34 -- common/autotest_common.sh@945 -- # kill 1614685 00:27:21.628 12:21:34 -- common/autotest_common.sh@950 -- # wait 1614685 00:27:21.888 12:21:34 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:21.888 12:21:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:21.888 12:21:34 -- common/autotest_common.sh@10 -- # set +x 00:27:21.888 12:21:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:21.888 12:21:34 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:21.888 12:21:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:21.888 12:21:34 -- common/autotest_common.sh@10 -- # set +x 00:27:21.888 12:21:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:21.888 12:21:34 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
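The bdevperf-side sequence above is the core of the multicontroller check: the first attach creates NVMe0n1, re-attaching the same controller name with a different host NQN, subsystem NQN, or multipath mode is rejected with JSON-RPC error -114 ("already exists"), while attaching through the second listener port is accepted as an additional path. A condensed sketch of those calls with scripts/rpc.py against the bdevperf socket used here:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
# Initial attach: creates bdev NVMe0n1 over 10.0.0.2:4420.
$rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
# Repeating the attach with -q <other hostnqn>, -n cnode2, -x disable or
# -x failover returns code -114, as captured in the JSON responses above.
# Adding the 4421 listener as a second path to the same subsystem succeeds:
$rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# The write workload itself is kicked off through the bdevperf RPC helper:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s $sock perform_tests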
00:27:21.888 12:21:34 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:21.888 12:21:34 -- common/autotest_common.sh@1597 -- # read -r file 00:27:21.888 12:21:34 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:21.888 12:21:34 -- common/autotest_common.sh@1596 -- # sort -u 00:27:21.888 12:21:34 -- common/autotest_common.sh@1598 -- # cat 00:27:21.888 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:21.888 [2024-06-11 12:21:32.026961] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:21.888 [2024-06-11 12:21:32.027013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614685 ] 00:27:21.888 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.888 [2024-06-11 12:21:32.086565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.888 [2024-06-11 12:21:32.116060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.888 [2024-06-11 12:21:33.440209] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 4dd9162d-68b8-4fe3-924a-9ae05c980e62 already exists 00:27:21.888 [2024-06-11 12:21:33.440240] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:4dd9162d-68b8-4fe3-924a-9ae05c980e62 alias for bdev NVMe1n1 00:27:21.888 [2024-06-11 12:21:33.440250] bdev_nvme.c:4230:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:21.888 Running I/O for 1 seconds... 00:27:21.888 00:27:21.888 Latency(us) 00:27:21.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:21.888 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:21.888 NVMe0n1 : 1.00 25219.50 98.51 0.00 0.00 5064.40 3768.32 15947.09 00:27:21.888 =================================================================================================================== 00:27:21.888 Total : 25219.50 98.51 0.00 0.00 5064.40 3768.32 15947.09 00:27:21.888 Received shutdown signal, test time was about 1.000000 seconds 00:27:21.888 00:27:21.888 Latency(us) 00:27:21.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:21.888 =================================================================================================================== 00:27:21.888 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:21.888 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:21.888 12:21:34 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:21.888 12:21:34 -- common/autotest_common.sh@1597 -- # read -r file 00:27:21.888 12:21:34 -- host/multicontroller.sh@108 -- # nvmftestfini 00:27:21.888 12:21:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:21.888 12:21:34 -- nvmf/common.sh@116 -- # sync 00:27:21.888 12:21:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:21.888 12:21:34 -- nvmf/common.sh@119 -- # set +e 00:27:21.888 12:21:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:21.888 12:21:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:21.888 rmmod nvme_tcp 00:27:21.888 rmmod nvme_fabrics 00:27:21.888 rmmod nvme_keyring 00:27:21.888 12:21:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:21.888 12:21:34 -- nvmf/common.sh@123 -- # set 
-e 00:27:21.888 12:21:34 -- nvmf/common.sh@124 -- # return 0 00:27:21.888 12:21:34 -- nvmf/common.sh@477 -- # '[' -n 1614506 ']' 00:27:21.888 12:21:34 -- nvmf/common.sh@478 -- # killprocess 1614506 00:27:21.888 12:21:34 -- common/autotest_common.sh@926 -- # '[' -z 1614506 ']' 00:27:21.888 12:21:34 -- common/autotest_common.sh@930 -- # kill -0 1614506 00:27:21.888 12:21:34 -- common/autotest_common.sh@931 -- # uname 00:27:21.888 12:21:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:21.888 12:21:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1614506 00:27:21.888 12:21:34 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:21.888 12:21:34 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:21.888 12:21:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1614506' 00:27:21.888 killing process with pid 1614506 00:27:21.888 12:21:34 -- common/autotest_common.sh@945 -- # kill 1614506 00:27:21.888 12:21:34 -- common/autotest_common.sh@950 -- # wait 1614506 00:27:22.148 12:21:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:22.148 12:21:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:22.148 12:21:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:22.148 12:21:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:22.148 12:21:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:22.148 12:21:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.148 12:21:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:22.148 12:21:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.689 12:21:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:24.689 00:27:24.689 real 0m13.665s 00:27:24.689 user 0m17.039s 00:27:24.689 sys 0m6.161s 00:27:24.689 12:21:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:24.689 12:21:37 -- common/autotest_common.sh@10 -- # set +x 00:27:24.689 ************************************ 00:27:24.689 END TEST nvmf_multicontroller 00:27:24.689 ************************************ 00:27:24.689 12:21:37 -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:24.689 12:21:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:24.689 12:21:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:24.689 12:21:37 -- common/autotest_common.sh@10 -- # set +x 00:27:24.689 ************************************ 00:27:24.689 START TEST nvmf_aer 00:27:24.689 ************************************ 00:27:24.689 12:21:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:24.689 * Looking for test storage... 
00:27:24.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:24.689 12:21:37 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:24.689 12:21:37 -- nvmf/common.sh@7 -- # uname -s 00:27:24.689 12:21:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:24.689 12:21:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:24.689 12:21:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:24.689 12:21:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:24.689 12:21:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:24.689 12:21:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:24.689 12:21:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:24.689 12:21:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:24.689 12:21:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:24.689 12:21:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:24.689 12:21:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:24.689 12:21:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:24.689 12:21:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:24.689 12:21:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:24.689 12:21:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:24.689 12:21:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:24.689 12:21:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:24.689 12:21:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:24.689 12:21:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:24.689 12:21:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.689 12:21:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.689 12:21:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.689 12:21:37 -- paths/export.sh@5 -- # export PATH 00:27:24.689 12:21:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.689 12:21:37 -- nvmf/common.sh@46 -- # : 0 00:27:24.689 12:21:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:24.689 12:21:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:24.689 12:21:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:24.689 12:21:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:24.689 12:21:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:24.689 12:21:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:24.689 12:21:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:24.689 12:21:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:24.689 12:21:37 -- host/aer.sh@11 -- # nvmftestinit 00:27:24.689 12:21:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:24.689 12:21:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:24.689 12:21:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:24.689 12:21:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:24.689 12:21:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:24.689 12:21:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.689 12:21:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:24.689 12:21:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.689 12:21:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:24.689 12:21:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:24.689 12:21:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:24.689 12:21:37 -- common/autotest_common.sh@10 -- # set +x 00:27:31.275 12:21:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:31.275 12:21:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:31.275 12:21:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:31.275 12:21:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:31.275 12:21:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:31.275 12:21:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:31.275 12:21:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:31.275 12:21:44 -- nvmf/common.sh@294 -- # net_devs=() 00:27:31.275 12:21:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:31.275 12:21:44 -- nvmf/common.sh@295 -- # e810=() 00:27:31.275 12:21:44 -- nvmf/common.sh@295 -- # local -ga e810 00:27:31.275 12:21:44 -- nvmf/common.sh@296 -- # x722=() 00:27:31.275 
12:21:44 -- nvmf/common.sh@296 -- # local -ga x722 00:27:31.275 12:21:44 -- nvmf/common.sh@297 -- # mlx=() 00:27:31.275 12:21:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:31.275 12:21:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:31.275 12:21:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:31.275 12:21:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:31.275 12:21:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:31.275 12:21:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:31.275 12:21:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:31.275 12:21:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:31.275 12:21:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:31.275 12:21:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:31.275 12:21:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:31.275 12:21:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:31.275 12:21:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:31.275 12:21:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:31.275 12:21:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:31.276 12:21:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:31.276 12:21:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:31.276 12:21:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:31.276 12:21:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:31.276 12:21:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:31.276 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:31.276 12:21:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:31.276 12:21:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:31.276 12:21:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.276 12:21:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.276 12:21:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:31.276 12:21:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:31.276 12:21:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:31.276 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:31.276 12:21:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:31.276 12:21:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:31.276 12:21:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.276 12:21:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.276 12:21:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:31.276 12:21:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:31.276 12:21:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:31.276 12:21:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:31.276 12:21:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:31.276 12:21:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.276 12:21:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:31.276 12:21:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.276 12:21:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:31.276 Found net devices under 0000:31:00.0: cvl_0_0 00:27:31.276 12:21:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.276 12:21:44 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:31.276 12:21:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.276 12:21:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:31.276 12:21:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.276 12:21:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:31.276 Found net devices under 0000:31:00.1: cvl_0_1 00:27:31.276 12:21:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.276 12:21:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:31.276 12:21:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:31.276 12:21:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:31.276 12:21:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:31.276 12:21:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:31.276 12:21:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:31.276 12:21:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:31.276 12:21:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:31.276 12:21:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:31.276 12:21:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:31.276 12:21:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:31.276 12:21:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:31.276 12:21:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:31.276 12:21:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:31.276 12:21:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:31.276 12:21:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:31.276 12:21:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:31.276 12:21:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:31.537 12:21:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:31.537 12:21:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:31.537 12:21:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:31.537 12:21:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:31.537 12:21:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:31.537 12:21:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:31.537 12:21:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:31.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:31.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:27:31.537 00:27:31.537 --- 10.0.0.2 ping statistics --- 00:27:31.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.537 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:27:31.537 12:21:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:31.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:31.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:27:31.537 00:27:31.537 --- 10.0.0.1 ping statistics --- 00:27:31.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.537 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:27:31.537 12:21:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:31.537 12:21:44 -- nvmf/common.sh@410 -- # return 0 00:27:31.537 12:21:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:31.537 12:21:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:31.537 12:21:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:31.537 12:21:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:31.537 12:21:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:31.537 12:21:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:31.537 12:21:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:31.537 12:21:44 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:31.537 12:21:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:31.537 12:21:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:31.537 12:21:44 -- common/autotest_common.sh@10 -- # set +x 00:27:31.537 12:21:44 -- nvmf/common.sh@469 -- # nvmfpid=1619451 00:27:31.537 12:21:44 -- nvmf/common.sh@470 -- # waitforlisten 1619451 00:27:31.537 12:21:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:31.537 12:21:44 -- common/autotest_common.sh@819 -- # '[' -z 1619451 ']' 00:27:31.537 12:21:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.537 12:21:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:31.537 12:21:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:31.537 12:21:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:31.537 12:21:44 -- common/autotest_common.sh@10 -- # set +x 00:27:31.798 [2024-06-11 12:21:44.578687] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:31.798 [2024-06-11 12:21:44.578755] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:31.798 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.798 [2024-06-11 12:21:44.650298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:31.798 [2024-06-11 12:21:44.688401] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:31.798 [2024-06-11 12:21:44.688546] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:31.798 [2024-06-11 12:21:44.688558] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:31.798 [2024-06-11 12:21:44.688567] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
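The nvmf_tcp_init trace above builds the loopback test topology by hand: one of the two ice ports (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, and an iptables rule opens TCP port 4420. A condensed sketch of those steps for reproducing the setup manually; the interface names and addresses are the ones used on this CI host, not general defaults:

ip netns add cvl_0_0_ns_spdk                          # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the first port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic on 4420 through the local firewall
ping -c 1 10.0.0.2                                    # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator reachability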
00:27:31.798 [2024-06-11 12:21:44.688711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:31.798 [2024-06-11 12:21:44.688832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:31.798 [2024-06-11 12:21:44.688989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.798 [2024-06-11 12:21:44.688990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:32.369 12:21:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:32.369 12:21:45 -- common/autotest_common.sh@852 -- # return 0 00:27:32.369 12:21:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:32.369 12:21:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:32.369 12:21:45 -- common/autotest_common.sh@10 -- # set +x 00:27:32.369 12:21:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:32.369 12:21:45 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:32.369 12:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:32.369 12:21:45 -- common/autotest_common.sh@10 -- # set +x 00:27:32.369 [2024-06-11 12:21:45.398300] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:32.629 12:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:32.629 12:21:45 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:32.629 12:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:32.629 12:21:45 -- common/autotest_common.sh@10 -- # set +x 00:27:32.629 Malloc0 00:27:32.629 12:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:32.629 12:21:45 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:32.629 12:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:32.629 12:21:45 -- common/autotest_common.sh@10 -- # set +x 00:27:32.629 12:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:32.629 12:21:45 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:32.629 12:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:32.629 12:21:45 -- common/autotest_common.sh@10 -- # set +x 00:27:32.629 12:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:32.629 12:21:45 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:32.629 12:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:32.629 12:21:45 -- common/autotest_common.sh@10 -- # set +x 00:27:32.629 [2024-06-11 12:21:45.457650] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:32.629 12:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:32.629 12:21:45 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:32.629 12:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:32.629 12:21:45 -- common/autotest_common.sh@10 -- # set +x 00:27:32.629 [2024-06-11 12:21:45.469474] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:32.629 [ 00:27:32.629 { 00:27:32.629 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:32.629 "subtype": "Discovery", 00:27:32.629 "listen_addresses": [], 00:27:32.629 "allow_any_host": true, 00:27:32.629 "hosts": [] 00:27:32.629 }, 00:27:32.629 { 00:27:32.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:27:32.629 "subtype": "NVMe", 00:27:32.629 "listen_addresses": [ 00:27:32.629 { 00:27:32.629 "transport": "TCP", 00:27:32.629 "trtype": "TCP", 00:27:32.629 "adrfam": "IPv4", 00:27:32.629 "traddr": "10.0.0.2", 00:27:32.629 "trsvcid": "4420" 00:27:32.629 } 00:27:32.629 ], 00:27:32.629 "allow_any_host": true, 00:27:32.629 "hosts": [], 00:27:32.629 "serial_number": "SPDK00000000000001", 00:27:32.629 "model_number": "SPDK bdev Controller", 00:27:32.629 "max_namespaces": 2, 00:27:32.629 "min_cntlid": 1, 00:27:32.629 "max_cntlid": 65519, 00:27:32.629 "namespaces": [ 00:27:32.629 { 00:27:32.629 "nsid": 1, 00:27:32.629 "bdev_name": "Malloc0", 00:27:32.629 "name": "Malloc0", 00:27:32.629 "nguid": "74751E97544F488AA187E7637245A8A4", 00:27:32.629 "uuid": "74751e97-544f-488a-a187-e7637245a8a4" 00:27:32.629 } 00:27:32.629 ] 00:27:32.629 } 00:27:32.629 ] 00:27:32.629 12:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:32.629 12:21:45 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:32.629 12:21:45 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:32.629 12:21:45 -- host/aer.sh@33 -- # aerpid=1619788 00:27:32.629 12:21:45 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:32.629 12:21:45 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:32.629 12:21:45 -- common/autotest_common.sh@1244 -- # local i=0 00:27:32.629 12:21:45 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:32.629 12:21:45 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:27:32.629 12:21:45 -- common/autotest_common.sh@1247 -- # i=1 00:27:32.629 12:21:45 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:27:32.629 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.629 12:21:45 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:32.629 12:21:45 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:27:32.629 12:21:45 -- common/autotest_common.sh@1247 -- # i=2 00:27:32.629 12:21:45 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:27:32.890 12:21:45 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:32.890 12:21:45 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:32.890 12:21:45 -- common/autotest_common.sh@1255 -- # return 0 00:27:32.890 12:21:45 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:32.890 12:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:32.890 12:21:45 -- common/autotest_common.sh@10 -- # set +x 00:27:32.890 Malloc1 00:27:32.890 12:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:32.890 12:21:45 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:32.890 12:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:32.890 12:21:45 -- common/autotest_common.sh@10 -- # set +x 00:27:32.890 12:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:32.890 12:21:45 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:32.890 12:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:32.890 12:21:45 -- common/autotest_common.sh@10 -- # set +x 00:27:32.890 Asynchronous Event Request test 00:27:32.890 Attaching to 10.0.0.2 00:27:32.890 Attached to 10.0.0.2 00:27:32.890 Registering asynchronous event callbacks... 
00:27:32.890 Starting namespace attribute notice tests for all controllers... 00:27:32.890 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:32.890 aer_cb - Changed Namespace 00:27:32.890 Cleaning up... 00:27:32.890 [ 00:27:32.890 { 00:27:32.890 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:32.890 "subtype": "Discovery", 00:27:32.890 "listen_addresses": [], 00:27:32.890 "allow_any_host": true, 00:27:32.890 "hosts": [] 00:27:32.890 }, 00:27:32.890 { 00:27:32.890 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:32.890 "subtype": "NVMe", 00:27:32.890 "listen_addresses": [ 00:27:32.890 { 00:27:32.890 "transport": "TCP", 00:27:32.890 "trtype": "TCP", 00:27:32.890 "adrfam": "IPv4", 00:27:32.890 "traddr": "10.0.0.2", 00:27:32.890 "trsvcid": "4420" 00:27:32.890 } 00:27:32.890 ], 00:27:32.890 "allow_any_host": true, 00:27:32.890 "hosts": [], 00:27:32.890 "serial_number": "SPDK00000000000001", 00:27:32.890 "model_number": "SPDK bdev Controller", 00:27:32.890 "max_namespaces": 2, 00:27:32.890 "min_cntlid": 1, 00:27:32.890 "max_cntlid": 65519, 00:27:32.890 "namespaces": [ 00:27:32.890 { 00:27:32.890 "nsid": 1, 00:27:32.890 "bdev_name": "Malloc0", 00:27:32.890 "name": "Malloc0", 00:27:32.890 "nguid": "74751E97544F488AA187E7637245A8A4", 00:27:32.890 "uuid": "74751e97-544f-488a-a187-e7637245a8a4" 00:27:32.890 }, 00:27:32.890 { 00:27:32.890 "nsid": 2, 00:27:32.890 "bdev_name": "Malloc1", 00:27:32.890 "name": "Malloc1", 00:27:32.890 "nguid": "8198CBFF174540639E5EFF07842958D1", 00:27:32.890 "uuid": "8198cbff-1745-4063-9e5e-ff07842958d1" 00:27:32.890 } 00:27:32.890 ] 00:27:32.890 } 00:27:32.890 ] 00:27:32.890 12:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:32.890 12:21:45 -- host/aer.sh@43 -- # wait 1619788 00:27:32.890 12:21:45 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:32.890 12:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:32.890 12:21:45 -- common/autotest_common.sh@10 -- # set +x 00:27:32.890 12:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:32.890 12:21:45 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:32.890 12:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:32.890 12:21:45 -- common/autotest_common.sh@10 -- # set +x 00:27:32.890 12:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:32.890 12:21:45 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:32.890 12:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:32.890 12:21:45 -- common/autotest_common.sh@10 -- # set +x 00:27:32.890 12:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:32.890 12:21:45 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:32.890 12:21:45 -- host/aer.sh@51 -- # nvmftestfini 00:27:32.890 12:21:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:32.890 12:21:45 -- nvmf/common.sh@116 -- # sync 00:27:32.890 12:21:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:32.890 12:21:45 -- nvmf/common.sh@119 -- # set +e 00:27:32.890 12:21:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:32.890 12:21:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:32.890 rmmod nvme_tcp 00:27:32.890 rmmod nvme_fabrics 00:27:32.890 rmmod nvme_keyring 00:27:32.890 12:21:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:32.890 12:21:45 -- nvmf/common.sh@123 -- # set -e 00:27:32.890 12:21:45 -- nvmf/common.sh@124 -- # return 0 00:27:32.890 12:21:45 -- nvmf/common.sh@477 -- # '[' -n 1619451 ']' 00:27:32.890 12:21:45 
-- nvmf/common.sh@478 -- # killprocess 1619451 00:27:32.890 12:21:45 -- common/autotest_common.sh@926 -- # '[' -z 1619451 ']' 00:27:32.890 12:21:45 -- common/autotest_common.sh@930 -- # kill -0 1619451 00:27:32.890 12:21:45 -- common/autotest_common.sh@931 -- # uname 00:27:32.890 12:21:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:32.890 12:21:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1619451 00:27:33.151 12:21:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:33.151 12:21:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:33.151 12:21:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1619451' 00:27:33.151 killing process with pid 1619451 00:27:33.151 12:21:45 -- common/autotest_common.sh@945 -- # kill 1619451 00:27:33.151 [2024-06-11 12:21:45.935072] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:33.151 12:21:45 -- common/autotest_common.sh@950 -- # wait 1619451 00:27:33.151 12:21:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:33.151 12:21:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:33.151 12:21:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:33.151 12:21:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:33.151 12:21:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:33.151 12:21:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.151 12:21:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:33.152 12:21:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.695 12:21:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:35.695 00:27:35.695 real 0m10.961s 00:27:35.695 user 0m7.641s 00:27:35.695 sys 0m5.683s 00:27:35.695 12:21:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:35.695 12:21:48 -- common/autotest_common.sh@10 -- # set +x 00:27:35.695 ************************************ 00:27:35.695 END TEST nvmf_aer 00:27:35.695 ************************************ 00:27:35.695 12:21:48 -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:35.695 12:21:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:35.695 12:21:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:35.695 12:21:48 -- common/autotest_common.sh@10 -- # set +x 00:27:35.695 ************************************ 00:27:35.695 START TEST nvmf_async_init 00:27:35.695 ************************************ 00:27:35.695 12:21:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:35.695 * Looking for test storage... 
00:27:35.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:35.695 12:21:48 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:35.695 12:21:48 -- nvmf/common.sh@7 -- # uname -s 00:27:35.695 12:21:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:35.695 12:21:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:35.695 12:21:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:35.695 12:21:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:35.695 12:21:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:35.695 12:21:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:35.695 12:21:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:35.695 12:21:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:35.695 12:21:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:35.695 12:21:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:35.695 12:21:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:35.695 12:21:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:35.695 12:21:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:35.695 12:21:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:35.695 12:21:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:35.695 12:21:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:35.695 12:21:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:35.695 12:21:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:35.695 12:21:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:35.695 12:21:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.695 12:21:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.695 12:21:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.695 12:21:48 -- paths/export.sh@5 -- # export PATH 00:27:35.696 12:21:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.696 12:21:48 -- nvmf/common.sh@46 -- # : 0 00:27:35.696 12:21:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:35.696 12:21:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:35.696 12:21:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:35.696 12:21:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:35.696 12:21:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:35.696 12:21:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:35.696 12:21:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:35.696 12:21:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:35.696 12:21:48 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:35.696 12:21:48 -- host/async_init.sh@14 -- # null_block_size=512 00:27:35.696 12:21:48 -- host/async_init.sh@15 -- # null_bdev=null0 00:27:35.696 12:21:48 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:35.696 12:21:48 -- host/async_init.sh@20 -- # uuidgen 00:27:35.696 12:21:48 -- host/async_init.sh@20 -- # tr -d - 00:27:35.696 12:21:48 -- host/async_init.sh@20 -- # nguid=8dc2759dbf7c4979aff79b506a7b3a78 00:27:35.696 12:21:48 -- host/async_init.sh@22 -- # nvmftestinit 00:27:35.696 12:21:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:35.696 12:21:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:35.696 12:21:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:35.696 12:21:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:35.696 12:21:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:35.696 12:21:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.696 12:21:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:35.696 12:21:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.696 12:21:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:35.696 12:21:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:35.696 12:21:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:35.696 12:21:48 -- common/autotest_common.sh@10 -- # set +x 00:27:42.277 12:21:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:42.277 12:21:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:42.277 12:21:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:42.277 12:21:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:42.277 12:21:55 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:42.277 12:21:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:42.277 12:21:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:42.277 12:21:55 -- nvmf/common.sh@294 -- # net_devs=() 00:27:42.277 12:21:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:42.277 12:21:55 -- nvmf/common.sh@295 -- # e810=() 00:27:42.277 12:21:55 -- nvmf/common.sh@295 -- # local -ga e810 00:27:42.277 12:21:55 -- nvmf/common.sh@296 -- # x722=() 00:27:42.277 12:21:55 -- nvmf/common.sh@296 -- # local -ga x722 00:27:42.277 12:21:55 -- nvmf/common.sh@297 -- # mlx=() 00:27:42.277 12:21:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:42.277 12:21:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:42.277 12:21:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:42.277 12:21:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:42.277 12:21:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:42.277 12:21:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:42.277 12:21:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:42.277 12:21:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:42.277 12:21:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:42.277 12:21:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:42.277 12:21:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:42.277 12:21:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:42.277 12:21:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:42.277 12:21:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:42.277 12:21:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:42.277 12:21:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:42.277 12:21:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:42.277 12:21:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:42.277 12:21:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:42.277 12:21:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:42.277 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:42.277 12:21:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:42.277 12:21:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:42.277 12:21:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:42.277 12:21:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:42.277 12:21:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:42.277 12:21:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:42.277 12:21:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:42.277 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:42.277 12:21:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:42.277 12:21:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:42.277 12:21:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:42.277 12:21:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:42.277 12:21:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:42.277 12:21:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:42.277 12:21:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:42.277 12:21:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:42.277 12:21:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:42.277 
12:21:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:42.277 12:21:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:42.277 12:21:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:42.277 12:21:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:42.277 Found net devices under 0000:31:00.0: cvl_0_0 00:27:42.277 12:21:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:42.277 12:21:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:42.277 12:21:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:42.277 12:21:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:42.277 12:21:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:42.277 12:21:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:42.277 Found net devices under 0000:31:00.1: cvl_0_1 00:27:42.277 12:21:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:42.277 12:21:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:42.277 12:21:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:42.277 12:21:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:42.277 12:21:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:42.277 12:21:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:42.277 12:21:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:42.277 12:21:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:42.277 12:21:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:42.277 12:21:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:42.277 12:21:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:42.278 12:21:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:42.278 12:21:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:42.278 12:21:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:42.278 12:21:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:42.278 12:21:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:42.278 12:21:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:42.278 12:21:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:42.278 12:21:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:42.539 12:21:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:42.539 12:21:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:42.539 12:21:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:42.539 12:21:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:42.539 12:21:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:42.539 12:21:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:42.539 12:21:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:42.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:42.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:27:42.539 00:27:42.539 --- 10.0.0.2 ping statistics --- 00:27:42.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.539 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:27:42.539 12:21:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:42.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:42.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:27:42.539 00:27:42.539 --- 10.0.0.1 ping statistics --- 00:27:42.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.539 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:27:42.539 12:21:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:42.539 12:21:55 -- nvmf/common.sh@410 -- # return 0 00:27:42.539 12:21:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:42.539 12:21:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:42.539 12:21:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:42.539 12:21:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:42.539 12:21:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:42.539 12:21:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:42.539 12:21:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:42.539 12:21:55 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:42.539 12:21:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:42.539 12:21:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:42.539 12:21:55 -- common/autotest_common.sh@10 -- # set +x 00:27:42.539 12:21:55 -- nvmf/common.sh@469 -- # nvmfpid=1623914 00:27:42.539 12:21:55 -- nvmf/common.sh@470 -- # waitforlisten 1623914 00:27:42.539 12:21:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:42.539 12:21:55 -- common/autotest_common.sh@819 -- # '[' -z 1623914 ']' 00:27:42.539 12:21:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.539 12:21:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:42.539 12:21:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:42.539 12:21:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:42.539 12:21:55 -- common/autotest_common.sh@10 -- # set +x 00:27:42.800 [2024-06-11 12:21:55.608422] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:42.800 [2024-06-11 12:21:55.608483] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:42.800 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.800 [2024-06-11 12:21:55.676698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.800 [2024-06-11 12:21:55.706929] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:42.800 [2024-06-11 12:21:55.707054] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:42.800 [2024-06-11 12:21:55.707064] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:42.800 [2024-06-11 12:21:55.707072] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
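As in the earlier test, nvmfappstart launches the target inside the target namespace and then waits for its RPC socket before any RPCs are issued. The trace above shows the exact invocation; a simple stand-in for the waitforlisten helper is to poll for the default /var/tmp/spdk.sock Unix socket (sketch only, paths relative to the spdk checkout):

# -i: shared memory id, -e: tracepoint group mask (0xFFFF as logged above), -m: core mask (one core for this test)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# Crude replacement for the test suite's waitforlisten helper: wait until the RPC socket exists.
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done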
00:27:42.800 [2024-06-11 12:21:55.707091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.372 12:21:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:43.372 12:21:56 -- common/autotest_common.sh@852 -- # return 0 00:27:43.372 12:21:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:43.372 12:21:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:43.372 12:21:56 -- common/autotest_common.sh@10 -- # set +x 00:27:43.633 12:21:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:43.633 12:21:56 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:43.633 12:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:43.633 12:21:56 -- common/autotest_common.sh@10 -- # set +x 00:27:43.633 [2024-06-11 12:21:56.423486] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:43.633 12:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:43.633 12:21:56 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:43.633 12:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:43.633 12:21:56 -- common/autotest_common.sh@10 -- # set +x 00:27:43.633 null0 00:27:43.633 12:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:43.633 12:21:56 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:43.633 12:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:43.633 12:21:56 -- common/autotest_common.sh@10 -- # set +x 00:27:43.633 12:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:43.633 12:21:56 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:43.633 12:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:43.633 12:21:56 -- common/autotest_common.sh@10 -- # set +x 00:27:43.633 12:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:43.633 12:21:56 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8dc2759dbf7c4979aff79b506a7b3a78 00:27:43.633 12:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:43.633 12:21:56 -- common/autotest_common.sh@10 -- # set +x 00:27:43.633 12:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:43.633 12:21:56 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:43.633 12:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:43.633 12:21:56 -- common/autotest_common.sh@10 -- # set +x 00:27:43.633 [2024-06-11 12:21:56.483762] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:43.633 12:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:43.633 12:21:56 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:43.633 12:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:43.633 12:21:56 -- common/autotest_common.sh@10 -- # set +x 00:27:43.894 nvme0n1 00:27:43.894 12:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:43.894 12:21:56 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:43.894 12:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:43.894 12:21:56 -- common/autotest_common.sh@10 -- # set +x 00:27:43.894 [ 00:27:43.894 { 00:27:43.894 "name": "nvme0n1", 00:27:43.894 "aliases": [ 00:27:43.894 
"8dc2759d-bf7c-4979-aff7-9b506a7b3a78" 00:27:43.894 ], 00:27:43.894 "product_name": "NVMe disk", 00:27:43.894 "block_size": 512, 00:27:43.894 "num_blocks": 2097152, 00:27:43.894 "uuid": "8dc2759d-bf7c-4979-aff7-9b506a7b3a78", 00:27:43.894 "assigned_rate_limits": { 00:27:43.894 "rw_ios_per_sec": 0, 00:27:43.894 "rw_mbytes_per_sec": 0, 00:27:43.894 "r_mbytes_per_sec": 0, 00:27:43.894 "w_mbytes_per_sec": 0 00:27:43.894 }, 00:27:43.894 "claimed": false, 00:27:43.894 "zoned": false, 00:27:43.894 "supported_io_types": { 00:27:43.894 "read": true, 00:27:43.894 "write": true, 00:27:43.894 "unmap": false, 00:27:43.894 "write_zeroes": true, 00:27:43.894 "flush": true, 00:27:43.894 "reset": true, 00:27:43.894 "compare": true, 00:27:43.894 "compare_and_write": true, 00:27:43.894 "abort": true, 00:27:43.894 "nvme_admin": true, 00:27:43.894 "nvme_io": true 00:27:43.894 }, 00:27:43.894 "driver_specific": { 00:27:43.894 "nvme": [ 00:27:43.894 { 00:27:43.894 "trid": { 00:27:43.894 "trtype": "TCP", 00:27:43.894 "adrfam": "IPv4", 00:27:43.894 "traddr": "10.0.0.2", 00:27:43.894 "trsvcid": "4420", 00:27:43.894 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:43.894 }, 00:27:43.894 "ctrlr_data": { 00:27:43.894 "cntlid": 1, 00:27:43.894 "vendor_id": "0x8086", 00:27:43.894 "model_number": "SPDK bdev Controller", 00:27:43.894 "serial_number": "00000000000000000000", 00:27:43.894 "firmware_revision": "24.01.1", 00:27:43.894 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:43.894 "oacs": { 00:27:43.894 "security": 0, 00:27:43.894 "format": 0, 00:27:43.894 "firmware": 0, 00:27:43.894 "ns_manage": 0 00:27:43.894 }, 00:27:43.894 "multi_ctrlr": true, 00:27:43.894 "ana_reporting": false 00:27:43.894 }, 00:27:43.894 "vs": { 00:27:43.894 "nvme_version": "1.3" 00:27:43.894 }, 00:27:43.894 "ns_data": { 00:27:43.894 "id": 1, 00:27:43.894 "can_share": true 00:27:43.894 } 00:27:43.894 } 00:27:43.894 ], 00:27:43.894 "mp_policy": "active_passive" 00:27:43.894 } 00:27:43.894 } 00:27:43.894 ] 00:27:43.894 12:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:43.894 12:21:56 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:43.895 12:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:43.895 12:21:56 -- common/autotest_common.sh@10 -- # set +x 00:27:43.895 [2024-06-11 12:21:56.756468] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:43.895 [2024-06-11 12:21:56.756526] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac26f0 (9): Bad file descriptor 00:27:43.895 [2024-06-11 12:21:56.900105] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:43.895 12:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:43.895 12:21:56 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:43.895 12:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:43.895 12:21:56 -- common/autotest_common.sh@10 -- # set +x 00:27:43.895 [ 00:27:43.895 { 00:27:43.895 "name": "nvme0n1", 00:27:43.895 "aliases": [ 00:27:43.895 "8dc2759d-bf7c-4979-aff7-9b506a7b3a78" 00:27:43.895 ], 00:27:43.895 "product_name": "NVMe disk", 00:27:43.895 "block_size": 512, 00:27:43.895 "num_blocks": 2097152, 00:27:43.895 "uuid": "8dc2759d-bf7c-4979-aff7-9b506a7b3a78", 00:27:43.895 "assigned_rate_limits": { 00:27:43.895 "rw_ios_per_sec": 0, 00:27:43.895 "rw_mbytes_per_sec": 0, 00:27:43.895 "r_mbytes_per_sec": 0, 00:27:43.895 "w_mbytes_per_sec": 0 00:27:43.895 }, 00:27:43.895 "claimed": false, 00:27:43.895 "zoned": false, 00:27:43.895 "supported_io_types": { 00:27:43.895 "read": true, 00:27:43.895 "write": true, 00:27:43.895 "unmap": false, 00:27:43.895 "write_zeroes": true, 00:27:43.895 "flush": true, 00:27:43.895 "reset": true, 00:27:43.895 "compare": true, 00:27:43.895 "compare_and_write": true, 00:27:43.895 "abort": true, 00:27:43.895 "nvme_admin": true, 00:27:43.895 "nvme_io": true 00:27:43.895 }, 00:27:43.895 "driver_specific": { 00:27:43.895 "nvme": [ 00:27:43.895 { 00:27:43.895 "trid": { 00:27:43.895 "trtype": "TCP", 00:27:43.895 "adrfam": "IPv4", 00:27:43.895 "traddr": "10.0.0.2", 00:27:43.895 "trsvcid": "4420", 00:27:43.895 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:43.895 }, 00:27:43.895 "ctrlr_data": { 00:27:43.895 "cntlid": 2, 00:27:43.895 "vendor_id": "0x8086", 00:27:43.895 "model_number": "SPDK bdev Controller", 00:27:43.895 "serial_number": "00000000000000000000", 00:27:43.895 "firmware_revision": "24.01.1", 00:27:43.895 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:43.895 "oacs": { 00:27:43.895 "security": 0, 00:27:43.895 "format": 0, 00:27:43.895 "firmware": 0, 00:27:43.895 "ns_manage": 0 00:27:43.895 }, 00:27:43.895 "multi_ctrlr": true, 00:27:43.895 "ana_reporting": false 00:27:43.895 }, 00:27:43.895 "vs": { 00:27:43.895 "nvme_version": "1.3" 00:27:43.895 }, 00:27:43.895 "ns_data": { 00:27:43.895 "id": 1, 00:27:43.895 "can_share": true 00:27:43.895 } 00:27:43.895 } 00:27:43.895 ], 00:27:43.895 "mp_policy": "active_passive" 00:27:43.895 } 00:27:43.895 } 00:27:43.895 ] 00:27:43.895 12:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:43.895 12:21:56 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.895 12:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:43.895 12:21:56 -- common/autotest_common.sh@10 -- # set +x 00:27:44.156 12:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:44.156 12:21:56 -- host/async_init.sh@53 -- # mktemp 00:27:44.156 12:21:56 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.8Lus3FRpec 00:27:44.156 12:21:56 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:44.156 12:21:56 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.8Lus3FRpec 00:27:44.156 12:21:56 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:44.156 12:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:44.156 12:21:56 -- common/autotest_common.sh@10 -- # set +x 00:27:44.156 12:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:44.156 12:21:56 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:44.156 12:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:44.156 12:21:56 -- common/autotest_common.sh@10 -- # set +x 00:27:44.156 [2024-06-11 12:21:56.969138] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:44.156 [2024-06-11 12:21:56.969240] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:44.156 12:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:44.156 12:21:56 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8Lus3FRpec 00:27:44.156 12:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:44.156 12:21:56 -- common/autotest_common.sh@10 -- # set +x 00:27:44.156 12:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:44.156 12:21:56 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8Lus3FRpec 00:27:44.156 12:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:44.156 12:21:56 -- common/autotest_common.sh@10 -- # set +x 00:27:44.156 [2024-06-11 12:21:56.993201] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:44.156 nvme0n1 00:27:44.156 12:21:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:44.156 12:21:57 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:44.156 12:21:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:44.156 12:21:57 -- common/autotest_common.sh@10 -- # set +x 00:27:44.156 [ 00:27:44.156 { 00:27:44.156 "name": "nvme0n1", 00:27:44.156 "aliases": [ 00:27:44.156 "8dc2759d-bf7c-4979-aff7-9b506a7b3a78" 00:27:44.156 ], 00:27:44.156 "product_name": "NVMe disk", 00:27:44.156 "block_size": 512, 00:27:44.156 "num_blocks": 2097152, 00:27:44.156 "uuid": "8dc2759d-bf7c-4979-aff7-9b506a7b3a78", 00:27:44.156 "assigned_rate_limits": { 00:27:44.156 "rw_ios_per_sec": 0, 00:27:44.156 "rw_mbytes_per_sec": 0, 00:27:44.156 "r_mbytes_per_sec": 0, 00:27:44.156 "w_mbytes_per_sec": 0 00:27:44.156 }, 00:27:44.156 "claimed": false, 00:27:44.156 "zoned": false, 00:27:44.156 "supported_io_types": { 00:27:44.156 "read": true, 00:27:44.156 "write": true, 00:27:44.156 "unmap": false, 00:27:44.156 "write_zeroes": true, 00:27:44.156 "flush": true, 00:27:44.156 "reset": true, 00:27:44.156 "compare": true, 00:27:44.156 "compare_and_write": true, 00:27:44.156 "abort": true, 00:27:44.156 "nvme_admin": true, 00:27:44.156 "nvme_io": true 00:27:44.156 }, 00:27:44.156 "driver_specific": { 00:27:44.156 "nvme": [ 00:27:44.156 { 00:27:44.156 "trid": { 00:27:44.156 "trtype": "TCP", 00:27:44.156 "adrfam": "IPv4", 00:27:44.156 "traddr": "10.0.0.2", 00:27:44.156 "trsvcid": "4421", 00:27:44.156 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:44.156 }, 00:27:44.156 "ctrlr_data": { 00:27:44.156 "cntlid": 3, 00:27:44.156 "vendor_id": "0x8086", 00:27:44.156 "model_number": "SPDK bdev Controller", 00:27:44.156 "serial_number": "00000000000000000000", 00:27:44.156 "firmware_revision": "24.01.1", 00:27:44.156 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:44.156 "oacs": { 00:27:44.156 "security": 0, 00:27:44.156 "format": 0, 00:27:44.156 "firmware": 0, 00:27:44.156 "ns_manage": 0 00:27:44.156 }, 00:27:44.156 "multi_ctrlr": true, 00:27:44.156 "ana_reporting": false 00:27:44.156 }, 00:27:44.156 "vs": 
{ 00:27:44.156 "nvme_version": "1.3" 00:27:44.156 }, 00:27:44.156 "ns_data": { 00:27:44.156 "id": 1, 00:27:44.156 "can_share": true 00:27:44.156 } 00:27:44.156 } 00:27:44.156 ], 00:27:44.156 "mp_policy": "active_passive" 00:27:44.156 } 00:27:44.156 } 00:27:44.156 ] 00:27:44.156 12:21:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:44.156 12:21:57 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.156 12:21:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:44.156 12:21:57 -- common/autotest_common.sh@10 -- # set +x 00:27:44.156 12:21:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:44.156 12:21:57 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.8Lus3FRpec 00:27:44.156 12:21:57 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:44.157 12:21:57 -- host/async_init.sh@78 -- # nvmftestfini 00:27:44.157 12:21:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:44.157 12:21:57 -- nvmf/common.sh@116 -- # sync 00:27:44.157 12:21:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:44.157 12:21:57 -- nvmf/common.sh@119 -- # set +e 00:27:44.157 12:21:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:44.157 12:21:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:44.157 rmmod nvme_tcp 00:27:44.157 rmmod nvme_fabrics 00:27:44.157 rmmod nvme_keyring 00:27:44.157 12:21:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:44.157 12:21:57 -- nvmf/common.sh@123 -- # set -e 00:27:44.157 12:21:57 -- nvmf/common.sh@124 -- # return 0 00:27:44.157 12:21:57 -- nvmf/common.sh@477 -- # '[' -n 1623914 ']' 00:27:44.157 12:21:57 -- nvmf/common.sh@478 -- # killprocess 1623914 00:27:44.157 12:21:57 -- common/autotest_common.sh@926 -- # '[' -z 1623914 ']' 00:27:44.157 12:21:57 -- common/autotest_common.sh@930 -- # kill -0 1623914 00:27:44.157 12:21:57 -- common/autotest_common.sh@931 -- # uname 00:27:44.157 12:21:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:44.157 12:21:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1623914 00:27:44.417 12:21:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:44.417 12:21:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:44.417 12:21:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1623914' 00:27:44.417 killing process with pid 1623914 00:27:44.417 12:21:57 -- common/autotest_common.sh@945 -- # kill 1623914 00:27:44.417 12:21:57 -- common/autotest_common.sh@950 -- # wait 1623914 00:27:44.417 12:21:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:44.417 12:21:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:44.417 12:21:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:44.417 12:21:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:44.417 12:21:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:44.417 12:21:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.417 12:21:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:44.417 12:21:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.964 12:21:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:46.964 00:27:46.964 real 0m11.203s 00:27:46.964 user 0m3.991s 00:27:46.964 sys 0m5.641s 00:27:46.964 12:21:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:46.964 12:21:59 -- common/autotest_common.sh@10 -- # set +x 00:27:46.964 ************************************ 00:27:46.964 END TEST nvmf_async_init 00:27:46.964 
************************************ 00:27:46.964 12:21:59 -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:46.964 12:21:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:46.964 12:21:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:46.964 12:21:59 -- common/autotest_common.sh@10 -- # set +x 00:27:46.964 ************************************ 00:27:46.964 START TEST dma 00:27:46.964 ************************************ 00:27:46.964 12:21:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:46.964 * Looking for test storage... 00:27:46.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:46.964 12:21:59 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:46.964 12:21:59 -- nvmf/common.sh@7 -- # uname -s 00:27:46.964 12:21:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:46.964 12:21:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:46.964 12:21:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:46.964 12:21:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:46.964 12:21:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:46.964 12:21:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:46.964 12:21:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:46.964 12:21:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:46.964 12:21:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:46.964 12:21:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:46.964 12:21:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:46.964 12:21:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:46.964 12:21:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:46.964 12:21:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:46.964 12:21:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:46.964 12:21:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:46.964 12:21:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:46.964 12:21:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:46.964 12:21:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:46.964 12:21:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.964 12:21:59 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.964 12:21:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.964 12:21:59 -- paths/export.sh@5 -- # export PATH 00:27:46.964 12:21:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.964 12:21:59 -- nvmf/common.sh@46 -- # : 0 00:27:46.964 12:21:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:46.964 12:21:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:46.964 12:21:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:46.964 12:21:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:46.964 12:21:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:46.964 12:21:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:46.964 12:21:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:46.964 12:21:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:46.964 12:21:59 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:46.964 12:21:59 -- host/dma.sh@13 -- # exit 0 00:27:46.964 00:27:46.964 real 0m0.119s 00:27:46.964 user 0m0.059s 00:27:46.964 sys 0m0.068s 00:27:46.964 12:21:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:46.964 12:21:59 -- common/autotest_common.sh@10 -- # set +x 00:27:46.964 ************************************ 00:27:46.964 END TEST dma 00:27:46.964 ************************************ 00:27:46.964 12:21:59 -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:46.964 12:21:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:46.964 12:21:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:46.964 12:21:59 -- common/autotest_common.sh@10 -- # set +x 00:27:46.964 ************************************ 00:27:46.964 START TEST nvmf_identify 00:27:46.964 ************************************ 00:27:46.964 12:21:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:46.964 * Looking for 
test storage... 00:27:46.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:46.964 12:21:59 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:46.964 12:21:59 -- nvmf/common.sh@7 -- # uname -s 00:27:46.964 12:21:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:46.964 12:21:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:46.964 12:21:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:46.964 12:21:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:46.964 12:21:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:46.964 12:21:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:46.964 12:21:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:46.964 12:21:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:46.964 12:21:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:46.964 12:21:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:46.964 12:21:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:46.964 12:21:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:46.964 12:21:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:46.964 12:21:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:46.964 12:21:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:46.964 12:21:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:46.965 12:21:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:46.965 12:21:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:46.965 12:21:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:46.965 12:21:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.965 12:21:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.965 12:21:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.965 12:21:59 -- paths/export.sh@5 -- # export PATH 00:27:46.965 12:21:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.965 12:21:59 -- nvmf/common.sh@46 -- # : 0 00:27:46.965 12:21:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:46.965 12:21:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:46.965 12:21:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:46.965 12:21:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:46.965 12:21:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:46.965 12:21:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:46.965 12:21:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:46.965 12:21:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:46.965 12:21:59 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:46.965 12:21:59 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:46.965 12:21:59 -- host/identify.sh@14 -- # nvmftestinit 00:27:46.965 12:21:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:46.965 12:21:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:46.965 12:21:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:46.965 12:21:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:46.965 12:21:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:46.965 12:21:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.965 12:21:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:46.965 12:21:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.965 12:21:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:46.965 12:21:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:46.965 12:21:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:46.965 12:21:59 -- common/autotest_common.sh@10 -- # set +x 00:27:55.108 12:22:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:55.108 12:22:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:55.108 12:22:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:55.108 12:22:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:55.108 12:22:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:55.108 12:22:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:55.108 12:22:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:55.108 12:22:06 -- nvmf/common.sh@294 -- # net_devs=() 00:27:55.108 12:22:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:55.108 12:22:06 -- nvmf/common.sh@295 
-- # e810=() 00:27:55.108 12:22:06 -- nvmf/common.sh@295 -- # local -ga e810 00:27:55.108 12:22:06 -- nvmf/common.sh@296 -- # x722=() 00:27:55.108 12:22:06 -- nvmf/common.sh@296 -- # local -ga x722 00:27:55.108 12:22:06 -- nvmf/common.sh@297 -- # mlx=() 00:27:55.108 12:22:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:55.108 12:22:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:55.108 12:22:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:55.108 12:22:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:55.108 12:22:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:55.108 12:22:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:55.108 12:22:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:55.108 12:22:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:55.108 12:22:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:55.108 12:22:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:55.108 12:22:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:55.108 12:22:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:55.108 12:22:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:55.108 12:22:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:55.108 12:22:06 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:55.108 12:22:06 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:55.108 12:22:06 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:55.108 12:22:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:55.108 12:22:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:55.108 12:22:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:55.108 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:55.108 12:22:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:55.108 12:22:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:55.108 12:22:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:55.108 12:22:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:55.108 12:22:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:55.108 12:22:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:55.108 12:22:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:55.108 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:55.108 12:22:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:55.108 12:22:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:55.108 12:22:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:55.108 12:22:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:55.108 12:22:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:55.108 12:22:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:55.108 12:22:06 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:55.108 12:22:06 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:55.108 12:22:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:55.108 12:22:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:55.108 12:22:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:55.108 12:22:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:55.108 12:22:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:55.109 Found 
net devices under 0000:31:00.0: cvl_0_0 00:27:55.109 12:22:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:55.109 12:22:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:55.109 12:22:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:55.109 12:22:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:55.109 12:22:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:55.109 12:22:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:55.109 Found net devices under 0000:31:00.1: cvl_0_1 00:27:55.109 12:22:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:55.109 12:22:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:55.109 12:22:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:55.109 12:22:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:55.109 12:22:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:55.109 12:22:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:55.109 12:22:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:55.109 12:22:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:55.109 12:22:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:55.109 12:22:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:55.109 12:22:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:55.109 12:22:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:55.109 12:22:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:55.109 12:22:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:55.109 12:22:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:55.109 12:22:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:55.109 12:22:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:55.109 12:22:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:55.109 12:22:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:55.109 12:22:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:55.109 12:22:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:55.109 12:22:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:55.109 12:22:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:55.109 12:22:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:55.109 12:22:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:55.109 12:22:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:55.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:55.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:27:55.109 00:27:55.109 --- 10.0.0.2 ping statistics --- 00:27:55.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:55.109 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:27:55.109 12:22:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:55.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:55.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:27:55.109 00:27:55.109 --- 10.0.0.1 ping statistics --- 00:27:55.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:55.109 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:27:55.109 12:22:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:55.109 12:22:07 -- nvmf/common.sh@410 -- # return 0 00:27:55.109 12:22:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:55.109 12:22:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:55.109 12:22:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:55.109 12:22:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:55.109 12:22:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:55.109 12:22:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:55.109 12:22:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:55.109 12:22:07 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:55.109 12:22:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:55.109 12:22:07 -- common/autotest_common.sh@10 -- # set +x 00:27:55.109 12:22:07 -- host/identify.sh@19 -- # nvmfpid=1628681 00:27:55.109 12:22:07 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:55.109 12:22:07 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:55.109 12:22:07 -- host/identify.sh@23 -- # waitforlisten 1628681 00:27:55.109 12:22:07 -- common/autotest_common.sh@819 -- # '[' -z 1628681 ']' 00:27:55.109 12:22:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.109 12:22:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:55.109 12:22:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:55.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:55.109 12:22:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:55.109 12:22:07 -- common/autotest_common.sh@10 -- # set +x 00:27:55.109 [2024-06-11 12:22:07.129589] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:55.109 [2024-06-11 12:22:07.129655] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:55.109 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.109 [2024-06-11 12:22:07.202120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:55.109 [2024-06-11 12:22:07.242101] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:55.109 [2024-06-11 12:22:07.242252] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:55.109 [2024-06-11 12:22:07.242265] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:55.109 [2024-06-11 12:22:07.242273] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
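The block above is nvmf/common.sh bringing the test network up before identify.sh launches the target: the first port (cvl_0_0) is moved into a dedicated network namespace and acts as the NVMe/TCP target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, one ping in each direction confirms the link, and nvmf_tgt is then started inside the namespace. A condensed sketch of that sequence, using the interface names and addresses this rig reports (the nvmf_tgt path is abbreviated from the full workspace path in the log):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                 # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &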
00:27:55.109 [2024-06-11 12:22:07.242438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.109 [2024-06-11 12:22:07.242558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:55.109 [2024-06-11 12:22:07.242718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.109 [2024-06-11 12:22:07.242719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:55.109 12:22:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:55.109 12:22:07 -- common/autotest_common.sh@852 -- # return 0 00:27:55.109 12:22:07 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:55.109 12:22:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.109 12:22:07 -- common/autotest_common.sh@10 -- # set +x 00:27:55.109 [2024-06-11 12:22:07.914225] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:55.109 12:22:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.109 12:22:07 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:55.109 12:22:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:55.109 12:22:07 -- common/autotest_common.sh@10 -- # set +x 00:27:55.109 12:22:07 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:55.109 12:22:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.109 12:22:07 -- common/autotest_common.sh@10 -- # set +x 00:27:55.109 Malloc0 00:27:55.109 12:22:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.109 12:22:07 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:55.109 12:22:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.109 12:22:07 -- common/autotest_common.sh@10 -- # set +x 00:27:55.109 12:22:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.109 12:22:07 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:55.109 12:22:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.109 12:22:07 -- common/autotest_common.sh@10 -- # set +x 00:27:55.109 12:22:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.109 12:22:08 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:55.109 12:22:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.109 12:22:08 -- common/autotest_common.sh@10 -- # set +x 00:27:55.109 [2024-06-11 12:22:08.013627] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.109 12:22:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.109 12:22:08 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:55.109 12:22:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.109 12:22:08 -- common/autotest_common.sh@10 -- # set +x 00:27:55.109 12:22:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.109 12:22:08 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:55.109 12:22:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.109 12:22:08 -- common/autotest_common.sh@10 -- # set +x 00:27:55.109 [2024-06-11 12:22:08.037477] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:55.109 [ 
00:27:55.109 { 00:27:55.109 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:55.109 "subtype": "Discovery", 00:27:55.109 "listen_addresses": [ 00:27:55.109 { 00:27:55.109 "transport": "TCP", 00:27:55.109 "trtype": "TCP", 00:27:55.109 "adrfam": "IPv4", 00:27:55.109 "traddr": "10.0.0.2", 00:27:55.109 "trsvcid": "4420" 00:27:55.109 } 00:27:55.109 ], 00:27:55.109 "allow_any_host": true, 00:27:55.109 "hosts": [] 00:27:55.109 }, 00:27:55.109 { 00:27:55.109 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:55.109 "subtype": "NVMe", 00:27:55.109 "listen_addresses": [ 00:27:55.109 { 00:27:55.109 "transport": "TCP", 00:27:55.109 "trtype": "TCP", 00:27:55.109 "adrfam": "IPv4", 00:27:55.109 "traddr": "10.0.0.2", 00:27:55.109 "trsvcid": "4420" 00:27:55.109 } 00:27:55.109 ], 00:27:55.109 "allow_any_host": true, 00:27:55.109 "hosts": [], 00:27:55.109 "serial_number": "SPDK00000000000001", 00:27:55.109 "model_number": "SPDK bdev Controller", 00:27:55.109 "max_namespaces": 32, 00:27:55.109 "min_cntlid": 1, 00:27:55.109 "max_cntlid": 65519, 00:27:55.109 "namespaces": [ 00:27:55.109 { 00:27:55.109 "nsid": 1, 00:27:55.109 "bdev_name": "Malloc0", 00:27:55.109 "name": "Malloc0", 00:27:55.109 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:55.109 "eui64": "ABCDEF0123456789", 00:27:55.109 "uuid": "2d3b7ee4-bb5a-4989-b8f6-170e710eebaf" 00:27:55.109 } 00:27:55.109 ] 00:27:55.109 } 00:27:55.109 ] 00:27:55.109 12:22:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.110 12:22:08 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:55.110 [2024-06-11 12:22:08.071660] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
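The spdk_nvme_identify run whose startup banner appears above is preceded by the target configuration recorded a few lines earlier. The rpc_cmd helper in the harness wraps scripts/rpc.py (talking to the /var/tmp/spdk.sock socket the target opened), so the equivalent standalone commands look roughly like this sketch, reproducing only the parameters the trace shows:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_get_subsystems      # returns the JSON dumped above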
00:27:55.110 [2024-06-11 12:22:08.071702] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1628723 ] 00:27:55.110 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.110 [2024-06-11 12:22:08.102665] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:55.110 [2024-06-11 12:22:08.102712] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:55.110 [2024-06-11 12:22:08.102717] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:55.110 [2024-06-11 12:22:08.102728] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:55.110 [2024-06-11 12:22:08.102736] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:55.110 [2024-06-11 12:22:08.106045] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:55.110 [2024-06-11 12:22:08.106079] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x9b3fd0 0 00:27:55.110 [2024-06-11 12:22:08.106327] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:55.110 [2024-06-11 12:22:08.106335] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:55.110 [2024-06-11 12:22:08.106341] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:55.110 [2024-06-11 12:22:08.106344] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:55.110 [2024-06-11 12:22:08.106376] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.110 [2024-06-11 12:22:08.106382] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.110 [2024-06-11 12:22:08.106387] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9b3fd0) 00:27:55.110 [2024-06-11 12:22:08.106400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:55.110 [2024-06-11 12:22:08.106413] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21180, cid 0, qid 0 00:27:55.110 [2024-06-11 12:22:08.114029] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.110 [2024-06-11 12:22:08.114039] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.110 [2024-06-11 12:22:08.114043] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.110 [2024-06-11 12:22:08.114047] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa21180) on tqpair=0x9b3fd0 00:27:55.110 [2024-06-11 12:22:08.114059] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:55.110 [2024-06-11 12:22:08.114066] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:55.110 [2024-06-11 12:22:08.114071] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:27:55.110 [2024-06-11 12:22:08.114086] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.110 [2024-06-11 12:22:08.114091] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:27:55.110 [2024-06-11 12:22:08.114094] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9b3fd0) 00:27:55.110 [2024-06-11 12:22:08.114102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.110 [2024-06-11 12:22:08.114114] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21180, cid 0, qid 0 00:27:55.110 [2024-06-11 12:22:08.114296] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.110 [2024-06-11 12:22:08.114303] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.110 [2024-06-11 12:22:08.114307] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.110 [2024-06-11 12:22:08.114310] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa21180) on tqpair=0x9b3fd0 00:27:55.110 [2024-06-11 12:22:08.114318] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:55.110 [2024-06-11 12:22:08.114326] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:55.110 [2024-06-11 12:22:08.114333] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.110 [2024-06-11 12:22:08.114337] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.110 [2024-06-11 12:22:08.114340] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9b3fd0) 00:27:55.110 [2024-06-11 12:22:08.114347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.110 [2024-06-11 12:22:08.114357] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21180, cid 0, qid 0 00:27:55.110 [2024-06-11 12:22:08.114513] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.110 [2024-06-11 12:22:08.114520] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.110 [2024-06-11 12:22:08.114523] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.110 [2024-06-11 12:22:08.114529] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa21180) on tqpair=0x9b3fd0 00:27:55.110 [2024-06-11 12:22:08.114535] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:55.110 [2024-06-11 12:22:08.114543] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:55.110 [2024-06-11 12:22:08.114549] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.110 [2024-06-11 12:22:08.114553] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.110 [2024-06-11 12:22:08.114556] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9b3fd0) 00:27:55.110 [2024-06-11 12:22:08.114563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.110 [2024-06-11 12:22:08.114574] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21180, cid 0, qid 0 00:27:55.110 [2024-06-11 12:22:08.114736] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.110 [2024-06-11 12:22:08.114743] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.110 [2024-06-11 12:22:08.114746] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.110 [2024-06-11 12:22:08.114750] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa21180) on tqpair=0x9b3fd0 00:27:55.110 [2024-06-11 12:22:08.114755] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:55.110 [2024-06-11 12:22:08.114764] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.110 [2024-06-11 12:22:08.114768] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.110 [2024-06-11 12:22:08.114772] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9b3fd0) 00:27:55.110 [2024-06-11 12:22:08.114778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.110 [2024-06-11 12:22:08.114788] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21180, cid 0, qid 0 00:27:55.110 [2024-06-11 12:22:08.114956] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.110 [2024-06-11 12:22:08.114962] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.110 [2024-06-11 12:22:08.114966] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.110 [2024-06-11 12:22:08.114970] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa21180) on tqpair=0x9b3fd0 00:27:55.110 [2024-06-11 12:22:08.114974] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:55.110 [2024-06-11 12:22:08.114980] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:55.110 [2024-06-11 12:22:08.114987] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:55.110 [2024-06-11 12:22:08.115092] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:55.110 [2024-06-11 12:22:08.115098] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:55.110 [2024-06-11 12:22:08.115106] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.110 [2024-06-11 12:22:08.115110] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.110 [2024-06-11 12:22:08.115114] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9b3fd0) 00:27:55.110 [2024-06-11 12:22:08.115121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.110 [2024-06-11 12:22:08.115131] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21180, cid 0, qid 0 00:27:55.110 [2024-06-11 12:22:08.115292] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.110 [2024-06-11 12:22:08.115299] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.110 [2024-06-11 12:22:08.115303] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.110 
[2024-06-11 12:22:08.115307] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa21180) on tqpair=0x9b3fd0 00:27:55.110 [2024-06-11 12:22:08.115312] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:55.110 [2024-06-11 12:22:08.115321] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.110 [2024-06-11 12:22:08.115325] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.110 [2024-06-11 12:22:08.115328] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9b3fd0) 00:27:55.110 [2024-06-11 12:22:08.115335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.110 [2024-06-11 12:22:08.115345] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21180, cid 0, qid 0 00:27:55.110 [2024-06-11 12:22:08.115511] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.110 [2024-06-11 12:22:08.115518] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.110 [2024-06-11 12:22:08.115521] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.110 [2024-06-11 12:22:08.115525] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa21180) on tqpair=0x9b3fd0 00:27:55.110 [2024-06-11 12:22:08.115529] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:55.110 [2024-06-11 12:22:08.115534] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:55.110 [2024-06-11 12:22:08.115541] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:55.110 [2024-06-11 12:22:08.115555] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:55.110 [2024-06-11 12:22:08.115562] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.110 [2024-06-11 12:22:08.115566] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.111 [2024-06-11 12:22:08.115569] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9b3fd0) 00:27:55.111 [2024-06-11 12:22:08.115576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.111 [2024-06-11 12:22:08.115586] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21180, cid 0, qid 0 00:27:55.111 [2024-06-11 12:22:08.115811] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:55.111 [2024-06-11 12:22:08.115819] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:55.111 [2024-06-11 12:22:08.115823] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:55.111 [2024-06-11 12:22:08.115827] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9b3fd0): datao=0, datal=4096, cccid=0 00:27:55.111 [2024-06-11 12:22:08.115832] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa21180) on tqpair(0x9b3fd0): expected_datao=0, payload_size=4096 00:27:55.111 
[2024-06-11 12:22:08.115847] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:55.111 [2024-06-11 12:22:08.115852] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:55.391 [2024-06-11 12:22:08.157025] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.391 [2024-06-11 12:22:08.157036] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.391 [2024-06-11 12:22:08.157039] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.391 [2024-06-11 12:22:08.157043] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa21180) on tqpair=0x9b3fd0 00:27:55.391 [2024-06-11 12:22:08.157054] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:55.391 [2024-06-11 12:22:08.157061] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:55.391 [2024-06-11 12:22:08.157066] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:55.391 [2024-06-11 12:22:08.157071] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:55.391 [2024-06-11 12:22:08.157075] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:55.391 [2024-06-11 12:22:08.157080] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:55.391 [2024-06-11 12:22:08.157088] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:55.391 [2024-06-11 12:22:08.157095] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.391 [2024-06-11 12:22:08.157099] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.391 [2024-06-11 12:22:08.157102] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9b3fd0) 00:27:55.391 [2024-06-11 12:22:08.157110] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:55.391 [2024-06-11 12:22:08.157121] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21180, cid 0, qid 0 00:27:55.391 [2024-06-11 12:22:08.157283] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.391 [2024-06-11 12:22:08.157290] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.391 [2024-06-11 12:22:08.157295] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.391 [2024-06-11 12:22:08.157300] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa21180) on tqpair=0x9b3fd0 00:27:55.391 [2024-06-11 12:22:08.157308] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.391 [2024-06-11 12:22:08.157312] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.391 [2024-06-11 12:22:08.157317] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9b3fd0) 00:27:55.391 [2024-06-11 12:22:08.157325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.391 [2024-06-11 12:22:08.157332] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.391 [2024-06-11 12:22:08.157336] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.391 [2024-06-11 12:22:08.157340] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x9b3fd0) 00:27:55.391 [2024-06-11 12:22:08.157347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.391 [2024-06-11 12:22:08.157355] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.391 [2024-06-11 12:22:08.157360] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.391 [2024-06-11 12:22:08.157365] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x9b3fd0) 00:27:55.392 [2024-06-11 12:22:08.157373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.392 [2024-06-11 12:22:08.157379] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.157384] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.157388] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9b3fd0) 00:27:55.392 [2024-06-11 12:22:08.157394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.392 [2024-06-11 12:22:08.157399] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:55.392 [2024-06-11 12:22:08.157412] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:55.392 [2024-06-11 12:22:08.157419] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.157423] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.157427] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9b3fd0) 00:27:55.392 [2024-06-11 12:22:08.157434] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.392 [2024-06-11 12:22:08.157445] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21180, cid 0, qid 0 00:27:55.392 [2024-06-11 12:22:08.157450] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa212e0, cid 1, qid 0 00:27:55.392 [2024-06-11 12:22:08.157455] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21440, cid 2, qid 0 00:27:55.392 [2024-06-11 12:22:08.157460] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa215a0, cid 3, qid 0 00:27:55.392 [2024-06-11 12:22:08.157464] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21700, cid 4, qid 0 00:27:55.392 [2024-06-11 12:22:08.157731] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.392 [2024-06-11 12:22:08.157738] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.392 [2024-06-11 12:22:08.157741] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.157745] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa21700) on 
tqpair=0x9b3fd0 00:27:55.392 [2024-06-11 12:22:08.157750] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:55.392 [2024-06-11 12:22:08.157755] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:55.392 [2024-06-11 12:22:08.157764] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.157768] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.157771] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9b3fd0) 00:27:55.392 [2024-06-11 12:22:08.157778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.392 [2024-06-11 12:22:08.157788] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21700, cid 4, qid 0 00:27:55.392 [2024-06-11 12:22:08.157953] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:55.392 [2024-06-11 12:22:08.157961] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:55.392 [2024-06-11 12:22:08.157964] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.157968] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9b3fd0): datao=0, datal=4096, cccid=4 00:27:55.392 [2024-06-11 12:22:08.157972] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa21700) on tqpair(0x9b3fd0): expected_datao=0, payload_size=4096 00:27:55.392 [2024-06-11 12:22:08.157979] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.157984] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.158184] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.392 [2024-06-11 12:22:08.158191] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.392 [2024-06-11 12:22:08.158195] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.158200] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa21700) on tqpair=0x9b3fd0 00:27:55.392 [2024-06-11 12:22:08.158210] nvme_ctrlr.c:4023:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:55.392 [2024-06-11 12:22:08.158233] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.158238] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.158242] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9b3fd0) 00:27:55.392 [2024-06-11 12:22:08.158248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.392 [2024-06-11 12:22:08.158255] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.158258] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.158262] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9b3fd0) 00:27:55.392 [2024-06-11 12:22:08.158268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE 
(18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.392 [2024-06-11 12:22:08.158283] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21700, cid 4, qid 0 00:27:55.392 [2024-06-11 12:22:08.158288] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21860, cid 5, qid 0 00:27:55.392 [2024-06-11 12:22:08.158557] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:55.392 [2024-06-11 12:22:08.158564] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:55.392 [2024-06-11 12:22:08.158568] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.158571] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9b3fd0): datao=0, datal=1024, cccid=4 00:27:55.392 [2024-06-11 12:22:08.158575] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa21700) on tqpair(0x9b3fd0): expected_datao=0, payload_size=1024 00:27:55.392 [2024-06-11 12:22:08.158583] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.158586] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.158592] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.392 [2024-06-11 12:22:08.158597] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.392 [2024-06-11 12:22:08.158601] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.158604] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa21860) on tqpair=0x9b3fd0 00:27:55.392 [2024-06-11 12:22:08.200210] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.392 [2024-06-11 12:22:08.200221] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.392 [2024-06-11 12:22:08.200224] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.200228] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa21700) on tqpair=0x9b3fd0 00:27:55.392 [2024-06-11 12:22:08.200237] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.200241] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.200245] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9b3fd0) 00:27:55.392 [2024-06-11 12:22:08.200251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.392 [2024-06-11 12:22:08.200266] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21700, cid 4, qid 0 00:27:55.392 [2024-06-11 12:22:08.200428] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:55.392 [2024-06-11 12:22:08.200435] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:55.392 [2024-06-11 12:22:08.200438] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.200442] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9b3fd0): datao=0, datal=3072, cccid=4 00:27:55.392 [2024-06-11 12:22:08.200446] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa21700) on tqpair(0x9b3fd0): expected_datao=0, payload_size=3072 00:27:55.392 [2024-06-11 12:22:08.200491] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:27:55.392 [2024-06-11 12:22:08.200499] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.200664] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.392 [2024-06-11 12:22:08.200671] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.392 [2024-06-11 12:22:08.200674] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.200678] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa21700) on tqpair=0x9b3fd0 00:27:55.392 [2024-06-11 12:22:08.200686] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.200689] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.200693] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9b3fd0) 00:27:55.392 [2024-06-11 12:22:08.200699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.392 [2024-06-11 12:22:08.200712] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa21700, cid 4, qid 0 00:27:55.392 [2024-06-11 12:22:08.200948] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:55.392 [2024-06-11 12:22:08.200954] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:55.392 [2024-06-11 12:22:08.200958] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.200961] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9b3fd0): datao=0, datal=8, cccid=4 00:27:55.392 [2024-06-11 12:22:08.200966] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa21700) on tqpair(0x9b3fd0): expected_datao=0, payload_size=8 00:27:55.392 [2024-06-11 12:22:08.200973] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.200976] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:55.392 [2024-06-11 12:22:08.245026] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.393 [2024-06-11 12:22:08.245035] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.393 [2024-06-11 12:22:08.245038] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.393 [2024-06-11 12:22:08.245042] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa21700) on tqpair=0x9b3fd0 00:27:55.393 ===================================================== 00:27:55.393 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:55.393 ===================================================== 00:27:55.393 Controller Capabilities/Features 00:27:55.393 ================================ 00:27:55.393 Vendor ID: 0000 00:27:55.393 Subsystem Vendor ID: 0000 00:27:55.393 Serial Number: .................... 00:27:55.393 Model Number: ........................................ 
00:27:55.393 Firmware Version: 24.01.1 00:27:55.393 Recommended Arb Burst: 0 00:27:55.393 IEEE OUI Identifier: 00 00 00 00:27:55.393 Multi-path I/O 00:27:55.393 May have multiple subsystem ports: No 00:27:55.393 May have multiple controllers: No 00:27:55.393 Associated with SR-IOV VF: No 00:27:55.393 Max Data Transfer Size: 131072 00:27:55.393 Max Number of Namespaces: 0 00:27:55.393 Max Number of I/O Queues: 1024 00:27:55.393 NVMe Specification Version (VS): 1.3 00:27:55.393 NVMe Specification Version (Identify): 1.3 00:27:55.393 Maximum Queue Entries: 128 00:27:55.393 Contiguous Queues Required: Yes 00:27:55.393 Arbitration Mechanisms Supported 00:27:55.393 Weighted Round Robin: Not Supported 00:27:55.393 Vendor Specific: Not Supported 00:27:55.393 Reset Timeout: 15000 ms 00:27:55.393 Doorbell Stride: 4 bytes 00:27:55.393 NVM Subsystem Reset: Not Supported 00:27:55.393 Command Sets Supported 00:27:55.393 NVM Command Set: Supported 00:27:55.393 Boot Partition: Not Supported 00:27:55.393 Memory Page Size Minimum: 4096 bytes 00:27:55.393 Memory Page Size Maximum: 4096 bytes 00:27:55.393 Persistent Memory Region: Not Supported 00:27:55.393 Optional Asynchronous Events Supported 00:27:55.393 Namespace Attribute Notices: Not Supported 00:27:55.393 Firmware Activation Notices: Not Supported 00:27:55.393 ANA Change Notices: Not Supported 00:27:55.393 PLE Aggregate Log Change Notices: Not Supported 00:27:55.393 LBA Status Info Alert Notices: Not Supported 00:27:55.393 EGE Aggregate Log Change Notices: Not Supported 00:27:55.393 Normal NVM Subsystem Shutdown event: Not Supported 00:27:55.393 Zone Descriptor Change Notices: Not Supported 00:27:55.393 Discovery Log Change Notices: Supported 00:27:55.393 Controller Attributes 00:27:55.393 128-bit Host Identifier: Not Supported 00:27:55.393 Non-Operational Permissive Mode: Not Supported 00:27:55.393 NVM Sets: Not Supported 00:27:55.393 Read Recovery Levels: Not Supported 00:27:55.393 Endurance Groups: Not Supported 00:27:55.393 Predictable Latency Mode: Not Supported 00:27:55.393 Traffic Based Keep ALive: Not Supported 00:27:55.393 Namespace Granularity: Not Supported 00:27:55.393 SQ Associations: Not Supported 00:27:55.393 UUID List: Not Supported 00:27:55.393 Multi-Domain Subsystem: Not Supported 00:27:55.393 Fixed Capacity Management: Not Supported 00:27:55.393 Variable Capacity Management: Not Supported 00:27:55.393 Delete Endurance Group: Not Supported 00:27:55.393 Delete NVM Set: Not Supported 00:27:55.393 Extended LBA Formats Supported: Not Supported 00:27:55.393 Flexible Data Placement Supported: Not Supported 00:27:55.393 00:27:55.393 Controller Memory Buffer Support 00:27:55.393 ================================ 00:27:55.393 Supported: No 00:27:55.393 00:27:55.393 Persistent Memory Region Support 00:27:55.393 ================================ 00:27:55.393 Supported: No 00:27:55.393 00:27:55.393 Admin Command Set Attributes 00:27:55.393 ============================ 00:27:55.393 Security Send/Receive: Not Supported 00:27:55.393 Format NVM: Not Supported 00:27:55.393 Firmware Activate/Download: Not Supported 00:27:55.393 Namespace Management: Not Supported 00:27:55.393 Device Self-Test: Not Supported 00:27:55.393 Directives: Not Supported 00:27:55.393 NVMe-MI: Not Supported 00:27:55.393 Virtualization Management: Not Supported 00:27:55.393 Doorbell Buffer Config: Not Supported 00:27:55.393 Get LBA Status Capability: Not Supported 00:27:55.393 Command & Feature Lockdown Capability: Not Supported 00:27:55.393 Abort Command Limit: 1 00:27:55.393 
Async Event Request Limit: 4 00:27:55.393 Number of Firmware Slots: N/A 00:27:55.393 Firmware Slot 1 Read-Only: N/A 00:27:55.393 Firmware Activation Without Reset: N/A 00:27:55.393 Multiple Update Detection Support: N/A 00:27:55.393 Firmware Update Granularity: No Information Provided 00:27:55.393 Per-Namespace SMART Log: No 00:27:55.393 Asymmetric Namespace Access Log Page: Not Supported 00:27:55.393 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:55.393 Command Effects Log Page: Not Supported 00:27:55.393 Get Log Page Extended Data: Supported 00:27:55.393 Telemetry Log Pages: Not Supported 00:27:55.393 Persistent Event Log Pages: Not Supported 00:27:55.393 Supported Log Pages Log Page: May Support 00:27:55.393 Commands Supported & Effects Log Page: Not Supported 00:27:55.393 Feature Identifiers & Effects Log Page:May Support 00:27:55.393 NVMe-MI Commands & Effects Log Page: May Support 00:27:55.393 Data Area 4 for Telemetry Log: Not Supported 00:27:55.393 Error Log Page Entries Supported: 128 00:27:55.393 Keep Alive: Not Supported 00:27:55.393 00:27:55.393 NVM Command Set Attributes 00:27:55.393 ========================== 00:27:55.393 Submission Queue Entry Size 00:27:55.393 Max: 1 00:27:55.393 Min: 1 00:27:55.393 Completion Queue Entry Size 00:27:55.393 Max: 1 00:27:55.393 Min: 1 00:27:55.393 Number of Namespaces: 0 00:27:55.393 Compare Command: Not Supported 00:27:55.393 Write Uncorrectable Command: Not Supported 00:27:55.393 Dataset Management Command: Not Supported 00:27:55.393 Write Zeroes Command: Not Supported 00:27:55.393 Set Features Save Field: Not Supported 00:27:55.393 Reservations: Not Supported 00:27:55.393 Timestamp: Not Supported 00:27:55.393 Copy: Not Supported 00:27:55.393 Volatile Write Cache: Not Present 00:27:55.393 Atomic Write Unit (Normal): 1 00:27:55.393 Atomic Write Unit (PFail): 1 00:27:55.393 Atomic Compare & Write Unit: 1 00:27:55.393 Fused Compare & Write: Supported 00:27:55.393 Scatter-Gather List 00:27:55.393 SGL Command Set: Supported 00:27:55.393 SGL Keyed: Supported 00:27:55.393 SGL Bit Bucket Descriptor: Not Supported 00:27:55.393 SGL Metadata Pointer: Not Supported 00:27:55.393 Oversized SGL: Not Supported 00:27:55.393 SGL Metadata Address: Not Supported 00:27:55.393 SGL Offset: Supported 00:27:55.393 Transport SGL Data Block: Not Supported 00:27:55.393 Replay Protected Memory Block: Not Supported 00:27:55.393 00:27:55.393 Firmware Slot Information 00:27:55.393 ========================= 00:27:55.394 Active slot: 0 00:27:55.394 00:27:55.394 00:27:55.394 Error Log 00:27:55.394 ========= 00:27:55.394 00:27:55.394 Active Namespaces 00:27:55.394 ================= 00:27:55.394 Discovery Log Page 00:27:55.394 ================== 00:27:55.394 Generation Counter: 2 00:27:55.394 Number of Records: 2 00:27:55.394 Record Format: 0 00:27:55.394 00:27:55.394 Discovery Log Entry 0 00:27:55.394 ---------------------- 00:27:55.394 Transport Type: 3 (TCP) 00:27:55.394 Address Family: 1 (IPv4) 00:27:55.394 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:55.394 Entry Flags: 00:27:55.394 Duplicate Returned Information: 1 00:27:55.394 Explicit Persistent Connection Support for Discovery: 1 00:27:55.394 Transport Requirements: 00:27:55.394 Secure Channel: Not Required 00:27:55.394 Port ID: 0 (0x0000) 00:27:55.394 Controller ID: 65535 (0xffff) 00:27:55.394 Admin Max SQ Size: 128 00:27:55.394 Transport Service Identifier: 4420 00:27:55.394 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:55.394 Transport Address: 10.0.0.2 00:27:55.394 
Discovery Log Entry 1 00:27:55.394 ---------------------- 00:27:55.394 Transport Type: 3 (TCP) 00:27:55.394 Address Family: 1 (IPv4) 00:27:55.394 Subsystem Type: 2 (NVM Subsystem) 00:27:55.394 Entry Flags: 00:27:55.394 Duplicate Returned Information: 0 00:27:55.394 Explicit Persistent Connection Support for Discovery: 0 00:27:55.394 Transport Requirements: 00:27:55.394 Secure Channel: Not Required 00:27:55.394 Port ID: 0 (0x0000) 00:27:55.394 Controller ID: 65535 (0xffff) 00:27:55.394 Admin Max SQ Size: 128 00:27:55.394 Transport Service Identifier: 4420 00:27:55.394 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:55.394 Transport Address: 10.0.0.2 [2024-06-11 12:22:08.245126] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:27:55.394 [2024-06-11 12:22:08.245138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.394 [2024-06-11 12:22:08.245145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.394 [2024-06-11 12:22:08.245151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.394 [2024-06-11 12:22:08.245157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.394 [2024-06-11 12:22:08.245167] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.394 [2024-06-11 12:22:08.245171] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.394 [2024-06-11 12:22:08.245174] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9b3fd0) 00:27:55.394 [2024-06-11 12:22:08.245181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.394 [2024-06-11 12:22:08.245194] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa215a0, cid 3, qid 0 00:27:55.394 [2024-06-11 12:22:08.245462] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.394 [2024-06-11 12:22:08.245469] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.394 [2024-06-11 12:22:08.245472] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.394 [2024-06-11 12:22:08.245476] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa215a0) on tqpair=0x9b3fd0 00:27:55.394 [2024-06-11 12:22:08.245483] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.394 [2024-06-11 12:22:08.245488] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.394 [2024-06-11 12:22:08.245492] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9b3fd0) 00:27:55.394 [2024-06-11 12:22:08.245498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.394 [2024-06-11 12:22:08.245511] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa215a0, cid 3, qid 0 00:27:55.394 [2024-06-11 12:22:08.245713] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.394 [2024-06-11 12:22:08.245719] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.394 [2024-06-11 12:22:08.245722] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.394 [2024-06-11 12:22:08.245726] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa215a0) on tqpair=0x9b3fd0 00:27:55.394 [2024-06-11 12:22:08.245731] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:55.394 [2024-06-11 12:22:08.245735] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:55.394 [2024-06-11 12:22:08.245744] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.394 [2024-06-11 12:22:08.245748] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.394 [2024-06-11 12:22:08.245751] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9b3fd0) 00:27:55.394 [2024-06-11 12:22:08.245758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.394 [2024-06-11 12:22:08.245768] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa215a0, cid 3, qid 0 00:27:55.394 [2024-06-11 12:22:08.245966] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.394 [2024-06-11 12:22:08.245973] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.394 [2024-06-11 12:22:08.245976] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.394 [2024-06-11 12:22:08.245980] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa215a0) on tqpair=0x9b3fd0 00:27:55.394 [2024-06-11 12:22:08.245990] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.394 [2024-06-11 12:22:08.245994] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.394 [2024-06-11 12:22:08.245997] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9b3fd0) 00:27:55.394 [2024-06-11 12:22:08.246004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.394 [2024-06-11 12:22:08.246014] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa215a0, cid 3, qid 0 00:27:55.394 [2024-06-11 12:22:08.246217] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.394 [2024-06-11 12:22:08.246224] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.394 [2024-06-11 12:22:08.246227] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.394 [2024-06-11 12:22:08.246231] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa215a0) on tqpair=0x9b3fd0 00:27:55.394 [2024-06-11 12:22:08.246240] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.394 [2024-06-11 12:22:08.246244] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.394 [2024-06-11 12:22:08.246247] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9b3fd0) 00:27:55.394 [2024-06-11 12:22:08.246254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.394 [2024-06-11 12:22:08.246264] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa215a0, cid 3, qid 0 00:27:55.394 [2024-06-11 12:22:08.246468] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.394 [2024-06-11 
12:22:08.246475] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.394 [2024-06-11 12:22:08.246478] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.394 [2024-06-11 12:22:08.246483] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa215a0) on tqpair=0x9b3fd0 00:27:55.394 [2024-06-11 12:22:08.246493] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.394 [2024-06-11 12:22:08.246497] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.246500] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9b3fd0) 00:27:55.395 [2024-06-11 12:22:08.246507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.395 [2024-06-11 12:22:08.246516] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa215a0, cid 3, qid 0 00:27:55.395 [2024-06-11 12:22:08.246721] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.395 [2024-06-11 12:22:08.246728] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.395 [2024-06-11 12:22:08.246731] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.246735] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa215a0) on tqpair=0x9b3fd0 00:27:55.395 [2024-06-11 12:22:08.246744] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.246748] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.246751] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9b3fd0) 00:27:55.395 [2024-06-11 12:22:08.246758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.395 [2024-06-11 12:22:08.246767] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa215a0, cid 3, qid 0 00:27:55.395 [2024-06-11 12:22:08.246919] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.395 [2024-06-11 12:22:08.246925] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.395 [2024-06-11 12:22:08.246928] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.246932] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa215a0) on tqpair=0x9b3fd0 00:27:55.395 [2024-06-11 12:22:08.246941] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.246945] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.246948] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9b3fd0) 00:27:55.395 [2024-06-11 12:22:08.246955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.395 [2024-06-11 12:22:08.246965] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa215a0, cid 3, qid 0 00:27:55.395 [2024-06-11 12:22:08.247176] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.395 [2024-06-11 12:22:08.247183] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.395 [2024-06-11 12:22:08.247186] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.395 
[2024-06-11 12:22:08.247190] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa215a0) on tqpair=0x9b3fd0 00:27:55.395 [2024-06-11 12:22:08.247199] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.247203] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.247206] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9b3fd0) 00:27:55.395 [2024-06-11 12:22:08.247213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.395 [2024-06-11 12:22:08.247223] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa215a0, cid 3, qid 0 00:27:55.395 [2024-06-11 12:22:08.247427] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.395 [2024-06-11 12:22:08.247433] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.395 [2024-06-11 12:22:08.247437] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.247440] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa215a0) on tqpair=0x9b3fd0 00:27:55.395 [2024-06-11 12:22:08.247453] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.247457] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.247461] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9b3fd0) 00:27:55.395 [2024-06-11 12:22:08.247467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.395 [2024-06-11 12:22:08.247477] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa215a0, cid 3, qid 0 00:27:55.395 [2024-06-11 12:22:08.247678] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.395 [2024-06-11 12:22:08.247684] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.395 [2024-06-11 12:22:08.247687] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.247691] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa215a0) on tqpair=0x9b3fd0 00:27:55.395 [2024-06-11 12:22:08.247700] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.247704] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.247708] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9b3fd0) 00:27:55.395 [2024-06-11 12:22:08.247714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.395 [2024-06-11 12:22:08.247724] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa215a0, cid 3, qid 0 00:27:55.395 [2024-06-11 12:22:08.247923] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.395 [2024-06-11 12:22:08.247929] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.395 [2024-06-11 12:22:08.247932] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.247936] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa215a0) on tqpair=0x9b3fd0 00:27:55.395 [2024-06-11 12:22:08.247945] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.247949] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.247952] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9b3fd0) 00:27:55.395 [2024-06-11 12:22:08.247959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.395 [2024-06-11 12:22:08.247968] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa215a0, cid 3, qid 0 00:27:55.395 [2024-06-11 12:22:08.248182] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.395 [2024-06-11 12:22:08.248189] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.395 [2024-06-11 12:22:08.248192] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.248196] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa215a0) on tqpair=0x9b3fd0 00:27:55.395 [2024-06-11 12:22:08.248206] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.248209] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.248213] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9b3fd0) 00:27:55.395 [2024-06-11 12:22:08.248219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.395 [2024-06-11 12:22:08.248229] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa215a0, cid 3, qid 0 00:27:55.395 [2024-06-11 12:22:08.248485] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.395 [2024-06-11 12:22:08.248491] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.395 [2024-06-11 12:22:08.248495] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.248498] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa215a0) on tqpair=0x9b3fd0 00:27:55.395 [2024-06-11 12:22:08.248508] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.248513] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.248517] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9b3fd0) 00:27:55.395 [2024-06-11 12:22:08.248523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.395 [2024-06-11 12:22:08.248533] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa215a0, cid 3, qid 0 00:27:55.395 [2024-06-11 12:22:08.248737] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.395 [2024-06-11 12:22:08.248743] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.395 [2024-06-11 12:22:08.248747] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.248751] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa215a0) on tqpair=0x9b3fd0 00:27:55.395 [2024-06-11 12:22:08.248760] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.248764] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.248767] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9b3fd0) 00:27:55.395 [2024-06-11 12:22:08.248774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.395 [2024-06-11 12:22:08.248783] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa215a0, cid 3, qid 0 00:27:55.395 [2024-06-11 12:22:08.248979] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.395 [2024-06-11 12:22:08.248985] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.395 [2024-06-11 12:22:08.248989] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.248992] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa215a0) on tqpair=0x9b3fd0 00:27:55.395 [2024-06-11 12:22:08.249002] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.249006] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.249010] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9b3fd0) 00:27:55.395 [2024-06-11 12:22:08.249019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.395 [2024-06-11 12:22:08.249030] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa215a0, cid 3, qid 0 00:27:55.395 [2024-06-11 12:22:08.249241] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.395 [2024-06-11 12:22:08.249247] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.395 [2024-06-11 12:22:08.249251] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.249254] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa215a0) on tqpair=0x9b3fd0 00:27:55.395 [2024-06-11 12:22:08.249264] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.249267] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.395 [2024-06-11 12:22:08.249271] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9b3fd0) 00:27:55.395 [2024-06-11 12:22:08.249277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.395 [2024-06-11 12:22:08.249287] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa215a0, cid 3, qid 0 00:27:55.395 [2024-06-11 12:22:08.249491] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.395 [2024-06-11 12:22:08.249498] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.395 [2024-06-11 12:22:08.249501] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.396 [2024-06-11 12:22:08.249505] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa215a0) on tqpair=0x9b3fd0 00:27:55.396 [2024-06-11 12:22:08.249514] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.396 [2024-06-11 12:22:08.249518] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.396 [2024-06-11 12:22:08.249523] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9b3fd0) 00:27:55.396 [2024-06-11 12:22:08.249530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.396 [2024-06-11 12:22:08.249539] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa215a0, cid 3, qid 0 00:27:55.396 [2024-06-11 12:22:08.249745] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.396 [2024-06-11 12:22:08.249751] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.396 [2024-06-11 12:22:08.249754] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.396 [2024-06-11 12:22:08.249758] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa215a0) on tqpair=0x9b3fd0 00:27:55.396 [2024-06-11 12:22:08.249767] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.396 [2024-06-11 12:22:08.249771] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.396 [2024-06-11 12:22:08.249775] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9b3fd0) 00:27:55.396 [2024-06-11 12:22:08.249781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.396 [2024-06-11 12:22:08.249791] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa215a0, cid 3, qid 0 00:27:55.396 [2024-06-11 12:22:08.249986] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.396 [2024-06-11 12:22:08.249992] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.396 [2024-06-11 12:22:08.249996] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.396 [2024-06-11 12:22:08.249999] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa215a0) on tqpair=0x9b3fd0 00:27:55.396 [2024-06-11 12:22:08.250009] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.396 [2024-06-11 12:22:08.250013] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.396 [2024-06-11 12:22:08.250019] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9b3fd0) 00:27:55.396 [2024-06-11 12:22:08.250026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.396 [2024-06-11 12:22:08.250036] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa215a0, cid 3, qid 0 00:27:55.396 [2024-06-11 12:22:08.250250] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.396 [2024-06-11 12:22:08.250256] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.396 [2024-06-11 12:22:08.250260] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.396 [2024-06-11 12:22:08.250263] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa215a0) on tqpair=0x9b3fd0 00:27:55.396 [2024-06-11 12:22:08.250273] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.396 [2024-06-11 12:22:08.250277] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.396 [2024-06-11 12:22:08.250280] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9b3fd0) 00:27:55.396 [2024-06-11 12:22:08.250287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.396 [2024-06-11 12:22:08.250297] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa215a0, cid 3, qid 0 
00:27:55.396 [2024-06-11 12:22:08.250501] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.396 [2024-06-11 12:22:08.250508] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.396 [2024-06-11 12:22:08.250511] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.396 [2024-06-11 12:22:08.250515] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa215a0) on tqpair=0x9b3fd0 00:27:55.396 [2024-06-11 12:22:08.250524] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.396 [2024-06-11 12:22:08.250528] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.396 [2024-06-11 12:22:08.250531] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9b3fd0) 00:27:55.396 [2024-06-11 12:22:08.250540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.396 [2024-06-11 12:22:08.250550] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa215a0, cid 3, qid 0 00:27:55.396 [2024-06-11 12:22:08.250754] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.396 [2024-06-11 12:22:08.250761] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.396 [2024-06-11 12:22:08.250764] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.396 [2024-06-11 12:22:08.250768] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa215a0) on tqpair=0x9b3fd0 00:27:55.396 [2024-06-11 12:22:08.250777] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.396 [2024-06-11 12:22:08.250781] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.396 [2024-06-11 12:22:08.250784] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9b3fd0) 00:27:55.396 [2024-06-11 12:22:08.250791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.396 [2024-06-11 12:22:08.250801] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa215a0, cid 3, qid 0 00:27:55.396 [2024-06-11 12:22:08.250963] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.396 [2024-06-11 12:22:08.250969] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.396 [2024-06-11 12:22:08.250973] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.396 [2024-06-11 12:22:08.250976] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa215a0) on tqpair=0x9b3fd0 00:27:55.396 [2024-06-11 12:22:08.250986] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.396 [2024-06-11 12:22:08.250990] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.396 [2024-06-11 12:22:08.250993] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9b3fd0) 00:27:55.396 [2024-06-11 12:22:08.251000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.396 [2024-06-11 12:22:08.251009] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa215a0, cid 3, qid 0 00:27:55.396 [2024-06-11 12:22:08.255026] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.396 [2024-06-11 12:22:08.255035] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:27:55.396 [2024-06-11 12:22:08.255038] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.396 [2024-06-11 12:22:08.255042] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa215a0) on tqpair=0x9b3fd0 00:27:55.396 [2024-06-11 12:22:08.255050] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 9 milliseconds 00:27:55.396 00:27:55.396 12:22:08 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:55.396 [2024-06-11 12:22:08.290541] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:55.396 [2024-06-11 12:22:08.290584] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1628804 ] 00:27:55.396 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.396 [2024-06-11 12:22:08.323549] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:55.396 [2024-06-11 12:22:08.323595] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:55.396 [2024-06-11 12:22:08.323603] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:55.396 [2024-06-11 12:22:08.323614] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:55.396 [2024-06-11 12:22:08.323623] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:55.396 [2024-06-11 12:22:08.327041] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:55.396 [2024-06-11 12:22:08.327066] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x130bfd0 0 00:27:55.396 [2024-06-11 12:22:08.335024] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:55.396 [2024-06-11 12:22:08.335034] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:55.396 [2024-06-11 12:22:08.335038] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:55.396 [2024-06-11 12:22:08.335042] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:55.396 [2024-06-11 12:22:08.335070] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.397 [2024-06-11 12:22:08.335076] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.397 [2024-06-11 12:22:08.335080] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130bfd0) 00:27:55.397 [2024-06-11 12:22:08.335091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:55.397 [2024-06-11 12:22:08.335107] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1379180, cid 0, qid 0 00:27:55.397 [2024-06-11 12:22:08.343029] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.397 [2024-06-11 12:22:08.343037] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.397 [2024-06-11 12:22:08.343041] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.397 [2024-06-11 
12:22:08.343045] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1379180) on tqpair=0x130bfd0 00:27:55.397 [2024-06-11 12:22:08.343057] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:55.397 [2024-06-11 12:22:08.343062] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:55.397 [2024-06-11 12:22:08.343068] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:55.397 [2024-06-11 12:22:08.343081] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.397 [2024-06-11 12:22:08.343085] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.397 [2024-06-11 12:22:08.343088] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130bfd0) 00:27:55.397 [2024-06-11 12:22:08.343096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.397 [2024-06-11 12:22:08.343108] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1379180, cid 0, qid 0 00:27:55.397 [2024-06-11 12:22:08.343304] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.397 [2024-06-11 12:22:08.343311] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.397 [2024-06-11 12:22:08.343315] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.397 [2024-06-11 12:22:08.343319] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1379180) on tqpair=0x130bfd0 00:27:55.397 [2024-06-11 12:22:08.343326] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:55.397 [2024-06-11 12:22:08.343334] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:55.397 [2024-06-11 12:22:08.343340] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.397 [2024-06-11 12:22:08.343344] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.397 [2024-06-11 12:22:08.343347] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130bfd0) 00:27:55.397 [2024-06-11 12:22:08.343354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.397 [2024-06-11 12:22:08.343367] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1379180, cid 0, qid 0 00:27:55.397 [2024-06-11 12:22:08.343545] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.397 [2024-06-11 12:22:08.343552] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.397 [2024-06-11 12:22:08.343555] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.397 [2024-06-11 12:22:08.343559] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1379180) on tqpair=0x130bfd0 00:27:55.397 [2024-06-11 12:22:08.343565] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:55.397 [2024-06-11 12:22:08.343572] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:55.397 [2024-06-11 12:22:08.343579] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.397 [2024-06-11 12:22:08.343582] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.397 [2024-06-11 12:22:08.343586] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130bfd0) 00:27:55.397 [2024-06-11 12:22:08.343593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.397 [2024-06-11 12:22:08.343602] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1379180, cid 0, qid 0 00:27:55.397 [2024-06-11 12:22:08.343810] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.397 [2024-06-11 12:22:08.343816] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.397 [2024-06-11 12:22:08.343820] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.397 [2024-06-11 12:22:08.343823] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1379180) on tqpair=0x130bfd0 00:27:55.397 [2024-06-11 12:22:08.343829] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:55.397 [2024-06-11 12:22:08.343838] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.397 [2024-06-11 12:22:08.343842] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.397 [2024-06-11 12:22:08.343845] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130bfd0) 00:27:55.397 [2024-06-11 12:22:08.343852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.397 [2024-06-11 12:22:08.343861] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1379180, cid 0, qid 0 00:27:55.397 [2024-06-11 12:22:08.344009] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.397 [2024-06-11 12:22:08.344016] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.397 [2024-06-11 12:22:08.344023] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.397 [2024-06-11 12:22:08.344027] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1379180) on tqpair=0x130bfd0 00:27:55.397 [2024-06-11 12:22:08.344032] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:55.397 [2024-06-11 12:22:08.344037] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:55.397 [2024-06-11 12:22:08.344044] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:55.397 [2024-06-11 12:22:08.344149] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:55.397 [2024-06-11 12:22:08.344153] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:55.397 [2024-06-11 12:22:08.344160] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.397 [2024-06-11 12:22:08.344164] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.397 [2024-06-11 12:22:08.344169] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130bfd0) 00:27:55.397 [2024-06-11 12:22:08.344176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.397 [2024-06-11 12:22:08.344186] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1379180, cid 0, qid 0 00:27:55.397 [2024-06-11 12:22:08.344368] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.397 [2024-06-11 12:22:08.344374] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.397 [2024-06-11 12:22:08.344378] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.397 [2024-06-11 12:22:08.344381] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1379180) on tqpair=0x130bfd0 00:27:55.397 [2024-06-11 12:22:08.344387] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:55.397 [2024-06-11 12:22:08.344396] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.397 [2024-06-11 12:22:08.344400] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.397 [2024-06-11 12:22:08.344403] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130bfd0) 00:27:55.397 [2024-06-11 12:22:08.344410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.397 [2024-06-11 12:22:08.344419] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1379180, cid 0, qid 0 00:27:55.397 [2024-06-11 12:22:08.344599] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.397 [2024-06-11 12:22:08.344606] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.397 [2024-06-11 12:22:08.344609] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.397 [2024-06-11 12:22:08.344613] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1379180) on tqpair=0x130bfd0 00:27:55.397 [2024-06-11 12:22:08.344618] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:55.397 [2024-06-11 12:22:08.344622] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:55.397 [2024-06-11 12:22:08.344630] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:55.397 [2024-06-11 12:22:08.344637] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:55.397 [2024-06-11 12:22:08.344644] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.397 [2024-06-11 12:22:08.344648] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.398 [2024-06-11 12:22:08.344651] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130bfd0) 00:27:55.398 [2024-06-11 12:22:08.344658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.398 [2024-06-11 12:22:08.344668] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1379180, 
cid 0, qid 0 00:27:55.398 [2024-06-11 12:22:08.344904] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:55.398 [2024-06-11 12:22:08.344911] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:55.398 [2024-06-11 12:22:08.344914] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:55.398 [2024-06-11 12:22:08.344918] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x130bfd0): datao=0, datal=4096, cccid=0 00:27:55.398 [2024-06-11 12:22:08.344923] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1379180) on tqpair(0x130bfd0): expected_datao=0, payload_size=4096 00:27:55.398 [2024-06-11 12:22:08.344938] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:55.398 [2024-06-11 12:22:08.344943] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:55.398 [2024-06-11 12:22:08.385200] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.398 [2024-06-11 12:22:08.385211] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.398 [2024-06-11 12:22:08.385214] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.398 [2024-06-11 12:22:08.385218] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1379180) on tqpair=0x130bfd0 00:27:55.398 [2024-06-11 12:22:08.385226] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:55.398 [2024-06-11 12:22:08.385233] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:55.398 [2024-06-11 12:22:08.385238] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:55.398 [2024-06-11 12:22:08.385242] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:55.398 [2024-06-11 12:22:08.385246] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:55.398 [2024-06-11 12:22:08.385251] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:55.398 [2024-06-11 12:22:08.385259] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:55.398 [2024-06-11 12:22:08.385266] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.398 [2024-06-11 12:22:08.385270] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.398 [2024-06-11 12:22:08.385274] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130bfd0) 00:27:55.398 [2024-06-11 12:22:08.385281] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:55.398 [2024-06-11 12:22:08.385292] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1379180, cid 0, qid 0 00:27:55.398 [2024-06-11 12:22:08.389024] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.398 [2024-06-11 12:22:08.389032] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.398 [2024-06-11 12:22:08.389035] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.398 [2024-06-11 12:22:08.389039] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x1379180) on tqpair=0x130bfd0 00:27:55.398 [2024-06-11 12:22:08.389047] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.398 [2024-06-11 12:22:08.389050] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.398 [2024-06-11 12:22:08.389054] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130bfd0) 00:27:55.398 [2024-06-11 12:22:08.389060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.398 [2024-06-11 12:22:08.389066] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.398 [2024-06-11 12:22:08.389070] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.398 [2024-06-11 12:22:08.389073] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x130bfd0) 00:27:55.398 [2024-06-11 12:22:08.389079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.398 [2024-06-11 12:22:08.389085] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.398 [2024-06-11 12:22:08.389088] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.398 [2024-06-11 12:22:08.389092] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x130bfd0) 00:27:55.398 [2024-06-11 12:22:08.389097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.398 [2024-06-11 12:22:08.389103] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.398 [2024-06-11 12:22:08.389107] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.398 [2024-06-11 12:22:08.389110] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x130bfd0) 00:27:55.398 [2024-06-11 12:22:08.389118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.398 [2024-06-11 12:22:08.389123] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:55.398 [2024-06-11 12:22:08.389133] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:55.398 [2024-06-11 12:22:08.389139] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.398 [2024-06-11 12:22:08.389143] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.398 [2024-06-11 12:22:08.389146] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x130bfd0) 00:27:55.398 [2024-06-11 12:22:08.389153] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.398 [2024-06-11 12:22:08.389166] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1379180, cid 0, qid 0 00:27:55.398 [2024-06-11 12:22:08.389171] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13792e0, cid 1, qid 0 00:27:55.398 [2024-06-11 12:22:08.389175] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1379440, cid 2, qid 0 00:27:55.398 [2024-06-11 12:22:08.389180] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x13795a0, cid 3, qid 0 00:27:55.398 [2024-06-11 12:22:08.389184] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1379700, cid 4, qid 0 00:27:55.398 [2024-06-11 12:22:08.389379] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.398 [2024-06-11 12:22:08.389386] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.398 [2024-06-11 12:22:08.389389] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.398 [2024-06-11 12:22:08.389393] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1379700) on tqpair=0x130bfd0 00:27:55.398 [2024-06-11 12:22:08.389399] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:55.398 [2024-06-11 12:22:08.389404] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:55.398 [2024-06-11 12:22:08.389411] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:55.398 [2024-06-11 12:22:08.389417] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:55.398 [2024-06-11 12:22:08.389423] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.398 [2024-06-11 12:22:08.389427] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.398 [2024-06-11 12:22:08.389431] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x130bfd0) 00:27:55.398 [2024-06-11 12:22:08.389437] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:55.398 [2024-06-11 12:22:08.389447] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1379700, cid 4, qid 0 00:27:55.398 [2024-06-11 12:22:08.389645] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.398 [2024-06-11 12:22:08.389651] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.398 [2024-06-11 12:22:08.389655] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.398 [2024-06-11 12:22:08.389658] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1379700) on tqpair=0x130bfd0 00:27:55.398 [2024-06-11 12:22:08.389709] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:55.398 [2024-06-11 12:22:08.389718] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:55.398 [2024-06-11 12:22:08.389729] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.398 [2024-06-11 12:22:08.389733] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.398 [2024-06-11 12:22:08.389737] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x130bfd0) 00:27:55.398 [2024-06-11 12:22:08.389743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.398 [2024-06-11 12:22:08.389753] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1379700, cid 4, qid 0 00:27:55.398 
[2024-06-11 12:22:08.389954] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:55.398 [2024-06-11 12:22:08.389961] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:55.398 [2024-06-11 12:22:08.389964] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:55.398 [2024-06-11 12:22:08.389968] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x130bfd0): datao=0, datal=4096, cccid=4 00:27:55.398 [2024-06-11 12:22:08.389973] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1379700) on tqpair(0x130bfd0): expected_datao=0, payload_size=4096 00:27:55.398 [2024-06-11 12:22:08.389987] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:55.398 [2024-06-11 12:22:08.389991] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:55.690 [2024-06-11 12:22:08.431193] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.690 [2024-06-11 12:22:08.431203] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.690 [2024-06-11 12:22:08.431206] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.690 [2024-06-11 12:22:08.431210] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1379700) on tqpair=0x130bfd0 00:27:55.690 [2024-06-11 12:22:08.431224] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:55.690 [2024-06-11 12:22:08.431233] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:55.691 [2024-06-11 12:22:08.431243] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:55.691 [2024-06-11 12:22:08.431249] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.431253] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.431256] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x130bfd0) 00:27:55.691 [2024-06-11 12:22:08.431263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.691 [2024-06-11 12:22:08.431274] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1379700, cid 4, qid 0 00:27:55.691 [2024-06-11 12:22:08.431485] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:55.691 [2024-06-11 12:22:08.431494] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:55.691 [2024-06-11 12:22:08.431500] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.431503] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x130bfd0): datao=0, datal=4096, cccid=4 00:27:55.691 [2024-06-11 12:22:08.431508] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1379700) on tqpair(0x130bfd0): expected_datao=0, payload_size=4096 00:27:55.691 [2024-06-11 12:22:08.431515] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.431518] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.431646] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.691 [2024-06-11 12:22:08.431653] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:27:55.691 [2024-06-11 12:22:08.431656] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.431660] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1379700) on tqpair=0x130bfd0 00:27:55.691 [2024-06-11 12:22:08.431672] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:55.691 [2024-06-11 12:22:08.431683] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:55.691 [2024-06-11 12:22:08.431690] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.431694] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.431697] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x130bfd0) 00:27:55.691 [2024-06-11 12:22:08.431704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.691 [2024-06-11 12:22:08.431714] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1379700, cid 4, qid 0 00:27:55.691 [2024-06-11 12:22:08.431931] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:55.691 [2024-06-11 12:22:08.431938] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:55.691 [2024-06-11 12:22:08.431941] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.431944] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x130bfd0): datao=0, datal=4096, cccid=4 00:27:55.691 [2024-06-11 12:22:08.431949] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1379700) on tqpair(0x130bfd0): expected_datao=0, payload_size=4096 00:27:55.691 [2024-06-11 12:22:08.431963] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.431966] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.477025] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.691 [2024-06-11 12:22:08.477035] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.691 [2024-06-11 12:22:08.477039] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.477042] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1379700) on tqpair=0x130bfd0 00:27:55.691 [2024-06-11 12:22:08.477051] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:55.691 [2024-06-11 12:22:08.477059] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:55.691 [2024-06-11 12:22:08.477067] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:55.691 [2024-06-11 12:22:08.477073] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:55.691 [2024-06-11 12:22:08.477078] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID 
(timeout 30000 ms) 00:27:55.691 [2024-06-11 12:22:08.477082] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:55.691 [2024-06-11 12:22:08.477087] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:55.691 [2024-06-11 12:22:08.477092] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:55.691 [2024-06-11 12:22:08.477104] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.477108] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.477112] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x130bfd0) 00:27:55.691 [2024-06-11 12:22:08.477118] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.691 [2024-06-11 12:22:08.477125] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.477128] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.477134] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x130bfd0) 00:27:55.691 [2024-06-11 12:22:08.477140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.691 [2024-06-11 12:22:08.477154] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1379700, cid 4, qid 0 00:27:55.691 [2024-06-11 12:22:08.477159] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1379860, cid 5, qid 0 00:27:55.691 [2024-06-11 12:22:08.477241] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.691 [2024-06-11 12:22:08.477248] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.691 [2024-06-11 12:22:08.477251] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.477255] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1379700) on tqpair=0x130bfd0 00:27:55.691 [2024-06-11 12:22:08.477262] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.691 [2024-06-11 12:22:08.477268] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.691 [2024-06-11 12:22:08.477271] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.477275] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1379860) on tqpair=0x130bfd0 00:27:55.691 [2024-06-11 12:22:08.477284] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.477288] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.477291] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x130bfd0) 00:27:55.691 [2024-06-11 12:22:08.477298] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.691 [2024-06-11 12:22:08.477307] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1379860, cid 5, qid 0 00:27:55.691 [2024-06-11 12:22:08.477456] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:27:55.691 [2024-06-11 12:22:08.477462] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.691 [2024-06-11 12:22:08.477465] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.477469] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1379860) on tqpair=0x130bfd0 00:27:55.691 [2024-06-11 12:22:08.477478] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.477482] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.477486] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x130bfd0) 00:27:55.691 [2024-06-11 12:22:08.477492] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.691 [2024-06-11 12:22:08.477501] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1379860, cid 5, qid 0 00:27:55.691 [2024-06-11 12:22:08.477687] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.691 [2024-06-11 12:22:08.477694] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.691 [2024-06-11 12:22:08.477697] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.477701] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1379860) on tqpair=0x130bfd0 00:27:55.691 [2024-06-11 12:22:08.477710] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.477714] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.477718] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x130bfd0) 00:27:55.691 [2024-06-11 12:22:08.477724] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.691 [2024-06-11 12:22:08.477733] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1379860, cid 5, qid 0 00:27:55.691 [2024-06-11 12:22:08.477951] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.691 [2024-06-11 12:22:08.477960] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.691 [2024-06-11 12:22:08.477963] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.477967] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1379860) on tqpair=0x130bfd0 00:27:55.691 [2024-06-11 12:22:08.477978] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.477982] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.477985] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x130bfd0) 00:27:55.691 [2024-06-11 12:22:08.477992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.691 [2024-06-11 12:22:08.477999] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.478002] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.478005] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x130bfd0) 00:27:55.691 [2024-06-11 12:22:08.478012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.691 [2024-06-11 12:22:08.478022] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.478026] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.691 [2024-06-11 12:22:08.478030] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x130bfd0) 00:27:55.691 [2024-06-11 12:22:08.478036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.692 [2024-06-11 12:22:08.478042] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.692 [2024-06-11 12:22:08.478046] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.692 [2024-06-11 12:22:08.478049] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x130bfd0) 00:27:55.692 [2024-06-11 12:22:08.478055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.692 [2024-06-11 12:22:08.478067] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1379860, cid 5, qid 0 00:27:55.692 [2024-06-11 12:22:08.478071] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1379700, cid 4, qid 0 00:27:55.692 [2024-06-11 12:22:08.478076] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13799c0, cid 6, qid 0 00:27:55.692 [2024-06-11 12:22:08.478081] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1379b20, cid 7, qid 0 00:27:55.692 [2024-06-11 12:22:08.478319] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:55.692 [2024-06-11 12:22:08.478326] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:55.692 [2024-06-11 12:22:08.478329] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:55.692 [2024-06-11 12:22:08.478333] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x130bfd0): datao=0, datal=8192, cccid=5 00:27:55.692 [2024-06-11 12:22:08.478337] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1379860) on tqpair(0x130bfd0): expected_datao=0, payload_size=8192 00:27:55.692 [2024-06-11 12:22:08.478416] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:55.692 [2024-06-11 12:22:08.478420] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:55.692 [2024-06-11 12:22:08.478426] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:55.692 [2024-06-11 12:22:08.478432] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:55.692 [2024-06-11 12:22:08.478435] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:55.692 [2024-06-11 12:22:08.478438] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x130bfd0): datao=0, datal=512, cccid=4 00:27:55.692 [2024-06-11 12:22:08.478443] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1379700) on tqpair(0x130bfd0): expected_datao=0, payload_size=512 00:27:55.692 [2024-06-11 12:22:08.478452] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:55.692 [2024-06-11 12:22:08.478455] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:55.692 [2024-06-11 12:22:08.478461] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:55.692 [2024-06-11 12:22:08.478466] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:55.692 [2024-06-11 12:22:08.478470] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:55.692 [2024-06-11 12:22:08.478473] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x130bfd0): datao=0, datal=512, cccid=6 00:27:55.692 [2024-06-11 12:22:08.478477] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13799c0) on tqpair(0x130bfd0): expected_datao=0, payload_size=512 00:27:55.692 [2024-06-11 12:22:08.478484] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:55.692 [2024-06-11 12:22:08.478488] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:55.692 [2024-06-11 12:22:08.478493] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:55.692 [2024-06-11 12:22:08.478499] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:55.692 [2024-06-11 12:22:08.478502] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:55.692 [2024-06-11 12:22:08.478506] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x130bfd0): datao=0, datal=4096, cccid=7 00:27:55.692 [2024-06-11 12:22:08.478510] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1379b20) on tqpair(0x130bfd0): expected_datao=0, payload_size=4096 00:27:55.692 [2024-06-11 12:22:08.478526] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:55.692 [2024-06-11 12:22:08.478530] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:55.692 [2024-06-11 12:22:08.478777] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.692 [2024-06-11 12:22:08.478782] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.692 [2024-06-11 12:22:08.478786] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.692 [2024-06-11 12:22:08.478789] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1379860) on tqpair=0x130bfd0 00:27:55.692 [2024-06-11 12:22:08.478802] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.692 [2024-06-11 12:22:08.478808] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.692 [2024-06-11 12:22:08.478811] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.692 [2024-06-11 12:22:08.478815] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1379700) on tqpair=0x130bfd0 00:27:55.692 [2024-06-11 12:22:08.478824] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.692 [2024-06-11 12:22:08.478830] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.692 [2024-06-11 12:22:08.478833] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.692 [2024-06-11 12:22:08.478837] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13799c0) on tqpair=0x130bfd0 00:27:55.692 [2024-06-11 12:22:08.478844] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.692 [2024-06-11 12:22:08.478850] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.692 [2024-06-11 12:22:08.478853] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.692 [2024-06-11 12:22:08.478857] 
nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1379b20) on tqpair=0x130bfd0 00:27:55.692 ===================================================== 00:27:55.692 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:55.692 ===================================================== 00:27:55.692 Controller Capabilities/Features 00:27:55.692 ================================ 00:27:55.692 Vendor ID: 8086 00:27:55.692 Subsystem Vendor ID: 8086 00:27:55.692 Serial Number: SPDK00000000000001 00:27:55.692 Model Number: SPDK bdev Controller 00:27:55.692 Firmware Version: 24.01.1 00:27:55.692 Recommended Arb Burst: 6 00:27:55.692 IEEE OUI Identifier: e4 d2 5c 00:27:55.692 Multi-path I/O 00:27:55.692 May have multiple subsystem ports: Yes 00:27:55.692 May have multiple controllers: Yes 00:27:55.692 Associated with SR-IOV VF: No 00:27:55.692 Max Data Transfer Size: 131072 00:27:55.692 Max Number of Namespaces: 32 00:27:55.692 Max Number of I/O Queues: 127 00:27:55.692 NVMe Specification Version (VS): 1.3 00:27:55.692 NVMe Specification Version (Identify): 1.3 00:27:55.692 Maximum Queue Entries: 128 00:27:55.692 Contiguous Queues Required: Yes 00:27:55.692 Arbitration Mechanisms Supported 00:27:55.692 Weighted Round Robin: Not Supported 00:27:55.692 Vendor Specific: Not Supported 00:27:55.692 Reset Timeout: 15000 ms 00:27:55.692 Doorbell Stride: 4 bytes 00:27:55.692 NVM Subsystem Reset: Not Supported 00:27:55.692 Command Sets Supported 00:27:55.692 NVM Command Set: Supported 00:27:55.692 Boot Partition: Not Supported 00:27:55.692 Memory Page Size Minimum: 4096 bytes 00:27:55.692 Memory Page Size Maximum: 4096 bytes 00:27:55.692 Persistent Memory Region: Not Supported 00:27:55.692 Optional Asynchronous Events Supported 00:27:55.692 Namespace Attribute Notices: Supported 00:27:55.692 Firmware Activation Notices: Not Supported 00:27:55.692 ANA Change Notices: Not Supported 00:27:55.692 PLE Aggregate Log Change Notices: Not Supported 00:27:55.692 LBA Status Info Alert Notices: Not Supported 00:27:55.692 EGE Aggregate Log Change Notices: Not Supported 00:27:55.692 Normal NVM Subsystem Shutdown event: Not Supported 00:27:55.692 Zone Descriptor Change Notices: Not Supported 00:27:55.692 Discovery Log Change Notices: Not Supported 00:27:55.692 Controller Attributes 00:27:55.692 128-bit Host Identifier: Supported 00:27:55.692 Non-Operational Permissive Mode: Not Supported 00:27:55.692 NVM Sets: Not Supported 00:27:55.692 Read Recovery Levels: Not Supported 00:27:55.692 Endurance Groups: Not Supported 00:27:55.692 Predictable Latency Mode: Not Supported 00:27:55.692 Traffic Based Keep ALive: Not Supported 00:27:55.692 Namespace Granularity: Not Supported 00:27:55.692 SQ Associations: Not Supported 00:27:55.692 UUID List: Not Supported 00:27:55.692 Multi-Domain Subsystem: Not Supported 00:27:55.692 Fixed Capacity Management: Not Supported 00:27:55.692 Variable Capacity Management: Not Supported 00:27:55.692 Delete Endurance Group: Not Supported 00:27:55.692 Delete NVM Set: Not Supported 00:27:55.692 Extended LBA Formats Supported: Not Supported 00:27:55.692 Flexible Data Placement Supported: Not Supported 00:27:55.692 00:27:55.692 Controller Memory Buffer Support 00:27:55.692 ================================ 00:27:55.692 Supported: No 00:27:55.692 00:27:55.692 Persistent Memory Region Support 00:27:55.692 ================================ 00:27:55.692 Supported: No 00:27:55.692 00:27:55.692 Admin Command Set Attributes 00:27:55.692 ============================ 00:27:55.692 
Security Send/Receive: Not Supported 00:27:55.692 Format NVM: Not Supported 00:27:55.692 Firmware Activate/Download: Not Supported 00:27:55.692 Namespace Management: Not Supported 00:27:55.692 Device Self-Test: Not Supported 00:27:55.692 Directives: Not Supported 00:27:55.692 NVMe-MI: Not Supported 00:27:55.692 Virtualization Management: Not Supported 00:27:55.692 Doorbell Buffer Config: Not Supported 00:27:55.692 Get LBA Status Capability: Not Supported 00:27:55.692 Command & Feature Lockdown Capability: Not Supported 00:27:55.692 Abort Command Limit: 4 00:27:55.692 Async Event Request Limit: 4 00:27:55.692 Number of Firmware Slots: N/A 00:27:55.692 Firmware Slot 1 Read-Only: N/A 00:27:55.692 Firmware Activation Without Reset: N/A 00:27:55.692 Multiple Update Detection Support: N/A 00:27:55.692 Firmware Update Granularity: No Information Provided 00:27:55.692 Per-Namespace SMART Log: No 00:27:55.692 Asymmetric Namespace Access Log Page: Not Supported 00:27:55.692 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:55.692 Command Effects Log Page: Supported 00:27:55.693 Get Log Page Extended Data: Supported 00:27:55.693 Telemetry Log Pages: Not Supported 00:27:55.693 Persistent Event Log Pages: Not Supported 00:27:55.693 Supported Log Pages Log Page: May Support 00:27:55.693 Commands Supported & Effects Log Page: Not Supported 00:27:55.693 Feature Identifiers & Effects Log Page:May Support 00:27:55.693 NVMe-MI Commands & Effects Log Page: May Support 00:27:55.693 Data Area 4 for Telemetry Log: Not Supported 00:27:55.693 Error Log Page Entries Supported: 128 00:27:55.693 Keep Alive: Supported 00:27:55.693 Keep Alive Granularity: 10000 ms 00:27:55.693 00:27:55.693 NVM Command Set Attributes 00:27:55.693 ========================== 00:27:55.693 Submission Queue Entry Size 00:27:55.693 Max: 64 00:27:55.693 Min: 64 00:27:55.693 Completion Queue Entry Size 00:27:55.693 Max: 16 00:27:55.693 Min: 16 00:27:55.693 Number of Namespaces: 32 00:27:55.693 Compare Command: Supported 00:27:55.693 Write Uncorrectable Command: Not Supported 00:27:55.693 Dataset Management Command: Supported 00:27:55.693 Write Zeroes Command: Supported 00:27:55.693 Set Features Save Field: Not Supported 00:27:55.693 Reservations: Supported 00:27:55.693 Timestamp: Not Supported 00:27:55.693 Copy: Supported 00:27:55.693 Volatile Write Cache: Present 00:27:55.693 Atomic Write Unit (Normal): 1 00:27:55.693 Atomic Write Unit (PFail): 1 00:27:55.693 Atomic Compare & Write Unit: 1 00:27:55.693 Fused Compare & Write: Supported 00:27:55.693 Scatter-Gather List 00:27:55.693 SGL Command Set: Supported 00:27:55.693 SGL Keyed: Supported 00:27:55.693 SGL Bit Bucket Descriptor: Not Supported 00:27:55.693 SGL Metadata Pointer: Not Supported 00:27:55.693 Oversized SGL: Not Supported 00:27:55.693 SGL Metadata Address: Not Supported 00:27:55.693 SGL Offset: Supported 00:27:55.693 Transport SGL Data Block: Not Supported 00:27:55.693 Replay Protected Memory Block: Not Supported 00:27:55.693 00:27:55.693 Firmware Slot Information 00:27:55.693 ========================= 00:27:55.693 Active slot: 1 00:27:55.693 Slot 1 Firmware Revision: 24.01.1 00:27:55.693 00:27:55.693 00:27:55.693 Commands Supported and Effects 00:27:55.693 ============================== 00:27:55.693 Admin Commands 00:27:55.693 -------------- 00:27:55.693 Get Log Page (02h): Supported 00:27:55.693 Identify (06h): Supported 00:27:55.693 Abort (08h): Supported 00:27:55.693 Set Features (09h): Supported 00:27:55.693 Get Features (0Ah): Supported 00:27:55.693 Asynchronous Event Request 
(0Ch): Supported 00:27:55.693 Keep Alive (18h): Supported 00:27:55.693 I/O Commands 00:27:55.693 ------------ 00:27:55.693 Flush (00h): Supported LBA-Change 00:27:55.693 Write (01h): Supported LBA-Change 00:27:55.693 Read (02h): Supported 00:27:55.693 Compare (05h): Supported 00:27:55.693 Write Zeroes (08h): Supported LBA-Change 00:27:55.693 Dataset Management (09h): Supported LBA-Change 00:27:55.693 Copy (19h): Supported LBA-Change 00:27:55.693 Unknown (79h): Supported LBA-Change 00:27:55.693 Unknown (7Ah): Supported 00:27:55.693 00:27:55.693 Error Log 00:27:55.693 ========= 00:27:55.693 00:27:55.693 Arbitration 00:27:55.693 =========== 00:27:55.693 Arbitration Burst: 1 00:27:55.693 00:27:55.693 Power Management 00:27:55.693 ================ 00:27:55.693 Number of Power States: 1 00:27:55.693 Current Power State: Power State #0 00:27:55.693 Power State #0: 00:27:55.693 Max Power: 0.00 W 00:27:55.693 Non-Operational State: Operational 00:27:55.693 Entry Latency: Not Reported 00:27:55.693 Exit Latency: Not Reported 00:27:55.693 Relative Read Throughput: 0 00:27:55.693 Relative Read Latency: 0 00:27:55.693 Relative Write Throughput: 0 00:27:55.693 Relative Write Latency: 0 00:27:55.693 Idle Power: Not Reported 00:27:55.693 Active Power: Not Reported 00:27:55.693 Non-Operational Permissive Mode: Not Supported 00:27:55.693 00:27:55.693 Health Information 00:27:55.693 ================== 00:27:55.693 Critical Warnings: 00:27:55.693 Available Spare Space: OK 00:27:55.693 Temperature: OK 00:27:55.693 Device Reliability: OK 00:27:55.693 Read Only: No 00:27:55.693 Volatile Memory Backup: OK 00:27:55.693 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:55.693 Temperature Threshold: [2024-06-11 12:22:08.478958] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.693 [2024-06-11 12:22:08.478963] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.693 [2024-06-11 12:22:08.478966] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x130bfd0) 00:27:55.693 [2024-06-11 12:22:08.478973] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.693 [2024-06-11 12:22:08.478984] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1379b20, cid 7, qid 0 00:27:55.693 [2024-06-11 12:22:08.479138] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.693 [2024-06-11 12:22:08.479145] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.693 [2024-06-11 12:22:08.479149] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.693 [2024-06-11 12:22:08.479152] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1379b20) on tqpair=0x130bfd0 00:27:55.693 [2024-06-11 12:22:08.479182] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:55.693 [2024-06-11 12:22:08.479193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.693 [2024-06-11 12:22:08.479199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.693 [2024-06-11 12:22:08.479205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.693 [2024-06-11 12:22:08.479211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.693 [2024-06-11 12:22:08.479218] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.693 [2024-06-11 12:22:08.479222] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.693 [2024-06-11 12:22:08.479226] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x130bfd0) 00:27:55.693 [2024-06-11 12:22:08.479232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.693 [2024-06-11 12:22:08.479244] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13795a0, cid 3, qid 0 00:27:55.693 [2024-06-11 12:22:08.479421] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.693 [2024-06-11 12:22:08.479427] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.693 [2024-06-11 12:22:08.479430] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.693 [2024-06-11 12:22:08.479434] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13795a0) on tqpair=0x130bfd0 00:27:55.693 [2024-06-11 12:22:08.479441] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.693 [2024-06-11 12:22:08.479445] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.693 [2024-06-11 12:22:08.479448] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x130bfd0) 00:27:55.693 [2024-06-11 12:22:08.479455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.693 [2024-06-11 12:22:08.479467] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13795a0, cid 3, qid 0 00:27:55.693 [2024-06-11 12:22:08.479663] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.693 [2024-06-11 12:22:08.479670] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.693 [2024-06-11 12:22:08.479674] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.693 [2024-06-11 12:22:08.479677] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13795a0) on tqpair=0x130bfd0 00:27:55.693 [2024-06-11 12:22:08.479682] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:55.693 [2024-06-11 12:22:08.479687] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:55.693 [2024-06-11 12:22:08.479696] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.693 [2024-06-11 12:22:08.479700] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.693 [2024-06-11 12:22:08.479704] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x130bfd0) 00:27:55.693 [2024-06-11 12:22:08.479710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.693 [2024-06-11 12:22:08.479720] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13795a0, cid 3, qid 0 00:27:55.693 [2024-06-11 12:22:08.479875] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.693 [2024-06-11 12:22:08.479883] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.693 [2024-06-11 
12:22:08.479886] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.693 [2024-06-11 12:22:08.479890] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13795a0) on tqpair=0x130bfd0 00:27:55.693 [2024-06-11 12:22:08.479900] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.693 [2024-06-11 12:22:08.479904] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.693 [2024-06-11 12:22:08.479908] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x130bfd0) 00:27:55.693 [2024-06-11 12:22:08.479914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.693 [2024-06-11 12:22:08.479924] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13795a0, cid 3, qid 0 00:27:55.693 [2024-06-11 12:22:08.480125] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.693 [2024-06-11 12:22:08.480131] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.693 [2024-06-11 12:22:08.480135] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.693 [2024-06-11 12:22:08.480138] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13795a0) on tqpair=0x130bfd0 00:27:55.694 [2024-06-11 12:22:08.480149] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.694 [2024-06-11 12:22:08.480152] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.694 [2024-06-11 12:22:08.480156] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x130bfd0) 00:27:55.694 [2024-06-11 12:22:08.480162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.694 [2024-06-11 12:22:08.480172] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13795a0, cid 3, qid 0 00:27:55.694 [2024-06-11 12:22:08.480386] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.694 [2024-06-11 12:22:08.480392] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.694 [2024-06-11 12:22:08.480396] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.694 [2024-06-11 12:22:08.480399] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13795a0) on tqpair=0x130bfd0 00:27:55.694 [2024-06-11 12:22:08.480410] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.694 [2024-06-11 12:22:08.480413] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.694 [2024-06-11 12:22:08.480417] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x130bfd0) 00:27:55.694 [2024-06-11 12:22:08.480423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.694 [2024-06-11 12:22:08.480433] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13795a0, cid 3, qid 0 00:27:55.694 [2024-06-11 12:22:08.480644] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.694 [2024-06-11 12:22:08.480650] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.694 [2024-06-11 12:22:08.480653] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.694 [2024-06-11 12:22:08.480657] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x13795a0) on tqpair=0x130bfd0 00:27:55.694 [2024-06-11 12:22:08.480667] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.694 [2024-06-11 12:22:08.480671] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.694 [2024-06-11 12:22:08.480674] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x130bfd0) 00:27:55.694 [2024-06-11 12:22:08.480681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.694 [2024-06-11 12:22:08.480690] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13795a0, cid 3, qid 0 00:27:55.694 [2024-06-11 12:22:08.480907] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.694 [2024-06-11 12:22:08.480913] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.694 [2024-06-11 12:22:08.480918] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.694 [2024-06-11 12:22:08.480922] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13795a0) on tqpair=0x130bfd0 00:27:55.694 [2024-06-11 12:22:08.480933] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:55.694 [2024-06-11 12:22:08.480936] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:55.694 [2024-06-11 12:22:08.480940] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x130bfd0) 00:27:55.694 [2024-06-11 12:22:08.480946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.694 [2024-06-11 12:22:08.480956] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13795a0, cid 3, qid 0 00:27:55.694 [2024-06-11 12:22:08.485026] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:55.694 [2024-06-11 12:22:08.485034] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:55.694 [2024-06-11 12:22:08.485037] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:55.694 [2024-06-11 12:22:08.485041] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13795a0) on tqpair=0x130bfd0 00:27:55.694 [2024-06-11 12:22:08.485049] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:27:55.694 0 Kelvin (-273 Celsius) 00:27:55.694 Available Spare: 0% 00:27:55.694 Available Spare Threshold: 0% 00:27:55.694 Life Percentage Used: 0% 00:27:55.694 Data Units Read: 0 00:27:55.694 Data Units Written: 0 00:27:55.694 Host Read Commands: 0 00:27:55.694 Host Write Commands: 0 00:27:55.694 Controller Busy Time: 0 minutes 00:27:55.694 Power Cycles: 0 00:27:55.694 Power On Hours: 0 hours 00:27:55.694 Unsafe Shutdowns: 0 00:27:55.694 Unrecoverable Media Errors: 0 00:27:55.694 Lifetime Error Log Entries: 0 00:27:55.694 Warning Temperature Time: 0 minutes 00:27:55.694 Critical Temperature Time: 0 minutes 00:27:55.694 00:27:55.694 Number of Queues 00:27:55.694 ================ 00:27:55.694 Number of I/O Submission Queues: 127 00:27:55.694 Number of I/O Completion Queues: 127 00:27:55.694 00:27:55.694 Active Namespaces 00:27:55.694 ================= 00:27:55.694 Namespace ID:1 00:27:55.694 Error Recovery Timeout: Unlimited 00:27:55.694 Command Set Identifier: NVM (00h) 00:27:55.694 Deallocate: Supported 00:27:55.694 Deallocated/Unwritten Error: Not Supported 00:27:55.694 Deallocated Read 
Value: Unknown 00:27:55.694 Deallocate in Write Zeroes: Not Supported 00:27:55.694 Deallocated Guard Field: 0xFFFF 00:27:55.694 Flush: Supported 00:27:55.694 Reservation: Supported 00:27:55.694 Namespace Sharing Capabilities: Multiple Controllers 00:27:55.694 Size (in LBAs): 131072 (0GiB) 00:27:55.694 Capacity (in LBAs): 131072 (0GiB) 00:27:55.694 Utilization (in LBAs): 131072 (0GiB) 00:27:55.694 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:55.694 EUI64: ABCDEF0123456789 00:27:55.694 UUID: 2d3b7ee4-bb5a-4989-b8f6-170e710eebaf 00:27:55.694 Thin Provisioning: Not Supported 00:27:55.694 Per-NS Atomic Units: Yes 00:27:55.694 Atomic Boundary Size (Normal): 0 00:27:55.694 Atomic Boundary Size (PFail): 0 00:27:55.694 Atomic Boundary Offset: 0 00:27:55.694 Maximum Single Source Range Length: 65535 00:27:55.694 Maximum Copy Length: 65535 00:27:55.694 Maximum Source Range Count: 1 00:27:55.694 NGUID/EUI64 Never Reused: No 00:27:55.694 Namespace Write Protected: No 00:27:55.694 Number of LBA Formats: 1 00:27:55.694 Current LBA Format: LBA Format #00 00:27:55.694 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:55.694 00:27:55.694 12:22:08 -- host/identify.sh@51 -- # sync 00:27:55.694 12:22:08 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:55.694 12:22:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.694 12:22:08 -- common/autotest_common.sh@10 -- # set +x 00:27:55.694 12:22:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.694 12:22:08 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:55.694 12:22:08 -- host/identify.sh@56 -- # nvmftestfini 00:27:55.694 12:22:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:55.694 12:22:08 -- nvmf/common.sh@116 -- # sync 00:27:55.694 12:22:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:55.694 12:22:08 -- nvmf/common.sh@119 -- # set +e 00:27:55.694 12:22:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:55.694 12:22:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:55.694 rmmod nvme_tcp 00:27:55.694 rmmod nvme_fabrics 00:27:55.694 rmmod nvme_keyring 00:27:55.694 12:22:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:55.694 12:22:08 -- nvmf/common.sh@123 -- # set -e 00:27:55.694 12:22:08 -- nvmf/common.sh@124 -- # return 0 00:27:55.694 12:22:08 -- nvmf/common.sh@477 -- # '[' -n 1628681 ']' 00:27:55.694 12:22:08 -- nvmf/common.sh@478 -- # killprocess 1628681 00:27:55.694 12:22:08 -- common/autotest_common.sh@926 -- # '[' -z 1628681 ']' 00:27:55.694 12:22:08 -- common/autotest_common.sh@930 -- # kill -0 1628681 00:27:55.694 12:22:08 -- common/autotest_common.sh@931 -- # uname 00:27:55.694 12:22:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:55.694 12:22:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1628681 00:27:55.694 12:22:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:55.694 12:22:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:55.694 12:22:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1628681' 00:27:55.694 killing process with pid 1628681 00:27:55.694 12:22:08 -- common/autotest_common.sh@945 -- # kill 1628681 00:27:55.694 [2024-06-11 12:22:08.635275] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:55.694 12:22:08 -- common/autotest_common.sh@950 -- # wait 1628681 00:27:55.956 12:22:08 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:55.956 12:22:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:55.956 12:22:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:55.956 12:22:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:55.956 12:22:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:55.956 12:22:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.956 12:22:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:55.956 12:22:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.868 12:22:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:57.868 00:27:57.868 real 0m11.238s 00:27:57.868 user 0m8.222s 00:27:57.868 sys 0m5.831s 00:27:57.868 12:22:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:57.868 12:22:10 -- common/autotest_common.sh@10 -- # set +x 00:27:57.868 ************************************ 00:27:57.868 END TEST nvmf_identify 00:27:57.868 ************************************ 00:27:57.868 12:22:10 -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:57.868 12:22:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:57.868 12:22:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:57.868 12:22:10 -- common/autotest_common.sh@10 -- # set +x 00:27:57.868 ************************************ 00:27:57.868 START TEST nvmf_perf 00:27:57.868 ************************************ 00:27:57.868 12:22:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:58.128 * Looking for test storage... 00:27:58.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:58.128 12:22:10 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:58.128 12:22:10 -- nvmf/common.sh@7 -- # uname -s 00:27:58.128 12:22:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:58.128 12:22:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:58.128 12:22:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:58.128 12:22:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:58.128 12:22:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:58.128 12:22:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:58.128 12:22:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:58.128 12:22:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:58.128 12:22:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:58.128 12:22:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:58.128 12:22:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:58.128 12:22:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:58.128 12:22:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:58.128 12:22:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:58.128 12:22:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:58.128 12:22:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:58.128 12:22:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:58.129 12:22:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:58.129 12:22:10 -- scripts/common.sh@442 -- 
# source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:58.129 12:22:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.129 12:22:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.129 12:22:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.129 12:22:10 -- paths/export.sh@5 -- # export PATH 00:27:58.129 12:22:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.129 12:22:10 -- nvmf/common.sh@46 -- # : 0 00:27:58.129 12:22:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:58.129 12:22:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:58.129 12:22:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:58.129 12:22:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:58.129 12:22:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:58.129 12:22:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:58.129 12:22:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:58.129 12:22:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:58.129 12:22:10 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:58.129 12:22:10 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:58.129 12:22:10 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:58.129 12:22:10 -- host/perf.sh@17 -- # nvmftestinit 00:27:58.129 12:22:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:58.129 12:22:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:58.129 12:22:10 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:27:58.129 12:22:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:58.129 12:22:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:58.129 12:22:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.129 12:22:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:58.129 12:22:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.129 12:22:11 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:58.129 12:22:11 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:58.129 12:22:11 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:58.129 12:22:11 -- common/autotest_common.sh@10 -- # set +x 00:28:04.712 12:22:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:04.712 12:22:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:04.712 12:22:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:04.712 12:22:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:04.712 12:22:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:04.712 12:22:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:04.712 12:22:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:04.712 12:22:17 -- nvmf/common.sh@294 -- # net_devs=() 00:28:04.712 12:22:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:04.712 12:22:17 -- nvmf/common.sh@295 -- # e810=() 00:28:04.712 12:22:17 -- nvmf/common.sh@295 -- # local -ga e810 00:28:04.712 12:22:17 -- nvmf/common.sh@296 -- # x722=() 00:28:04.712 12:22:17 -- nvmf/common.sh@296 -- # local -ga x722 00:28:04.712 12:22:17 -- nvmf/common.sh@297 -- # mlx=() 00:28:04.712 12:22:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:04.712 12:22:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:04.712 12:22:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:04.712 12:22:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:04.712 12:22:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:04.712 12:22:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:04.712 12:22:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:04.712 12:22:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:04.712 12:22:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:04.712 12:22:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:04.712 12:22:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:04.712 12:22:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:04.712 12:22:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:04.712 12:22:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:04.712 12:22:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:04.712 12:22:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:04.712 12:22:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:04.712 12:22:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:04.712 12:22:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:04.712 12:22:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:04.712 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:04.712 12:22:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:04.712 12:22:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:04.712 12:22:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.712 12:22:17 -- nvmf/common.sh@350 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.712 12:22:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:04.712 12:22:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:04.712 12:22:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:04.712 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:04.712 12:22:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:04.712 12:22:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:04.712 12:22:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.712 12:22:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.712 12:22:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:04.712 12:22:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:04.712 12:22:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:04.712 12:22:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:04.712 12:22:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:04.712 12:22:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.712 12:22:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:04.712 12:22:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.712 12:22:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:04.712 Found net devices under 0000:31:00.0: cvl_0_0 00:28:04.712 12:22:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.712 12:22:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:04.712 12:22:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.712 12:22:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:04.712 12:22:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.712 12:22:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:04.712 Found net devices under 0000:31:00.1: cvl_0_1 00:28:04.712 12:22:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.712 12:22:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:04.712 12:22:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:04.712 12:22:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:04.712 12:22:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:04.712 12:22:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:04.712 12:22:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:04.712 12:22:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:04.712 12:22:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:04.713 12:22:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:04.713 12:22:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:04.713 12:22:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:04.713 12:22:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:04.713 12:22:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:04.713 12:22:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:04.713 12:22:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:04.713 12:22:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:04.713 12:22:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:04.713 12:22:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:04.713 12:22:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:04.713 12:22:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:28:04.713 12:22:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:04.713 12:22:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:04.713 12:22:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:04.713 12:22:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:04.713 12:22:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:04.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:04.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:28:04.713 00:28:04.713 --- 10.0.0.2 ping statistics --- 00:28:04.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.713 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:28:04.713 12:22:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:04.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:04.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:28:04.713 00:28:04.713 --- 10.0.0.1 ping statistics --- 00:28:04.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.713 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:28:04.713 12:22:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:04.713 12:22:17 -- nvmf/common.sh@410 -- # return 0 00:28:04.713 12:22:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:04.713 12:22:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:04.713 12:22:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:04.713 12:22:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:04.713 12:22:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:04.713 12:22:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:04.713 12:22:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:04.713 12:22:17 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:04.713 12:22:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:04.713 12:22:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:04.713 12:22:17 -- common/autotest_common.sh@10 -- # set +x 00:28:04.713 12:22:17 -- nvmf/common.sh@469 -- # nvmfpid=1633011 00:28:04.713 12:22:17 -- nvmf/common.sh@470 -- # waitforlisten 1633011 00:28:04.713 12:22:17 -- common/autotest_common.sh@819 -- # '[' -z 1633011 ']' 00:28:04.713 12:22:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.713 12:22:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:04.713 12:22:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:04.713 12:22:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:04.713 12:22:17 -- common/autotest_common.sh@10 -- # set +x 00:28:04.713 12:22:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:04.713 [2024-06-11 12:22:17.601440] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
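Condensed from the trace above, nvmf_tcp_init builds a back-to-back topology out of the two physical ports: cvl_0_0 is moved into a private network namespace and becomes the target side, while cvl_0_1 stays in the root namespace as the initiator side. A minimal sketch using the names and addresses from this log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                   # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

With both pings answering, nvmfappstart launches nvmf_tgt inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which produces the SPDK/DPDK initialization banner around this point in the log.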
00:28:04.713 [2024-06-11 12:22:17.601505] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:04.713 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.713 [2024-06-11 12:22:17.673663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:04.713 [2024-06-11 12:22:17.712163] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:04.713 [2024-06-11 12:22:17.712310] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:04.713 [2024-06-11 12:22:17.712322] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:04.713 [2024-06-11 12:22:17.712331] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:04.713 [2024-06-11 12:22:17.712473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.713 [2024-06-11 12:22:17.712597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:04.713 [2024-06-11 12:22:17.712760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.713 [2024-06-11 12:22:17.712762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:05.655 12:22:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:05.655 12:22:18 -- common/autotest_common.sh@852 -- # return 0 00:28:05.655 12:22:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:05.655 12:22:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:05.655 12:22:18 -- common/autotest_common.sh@10 -- # set +x 00:28:05.655 12:22:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:05.655 12:22:18 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:05.655 12:22:18 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:05.916 12:22:18 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:05.916 12:22:18 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:06.176 12:22:19 -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:28:06.176 12:22:19 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:06.437 12:22:19 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:06.437 12:22:19 -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:28:06.437 12:22:19 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:06.437 12:22:19 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:06.437 12:22:19 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:06.437 [2024-06-11 12:22:19.348157] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:06.437 12:22:19 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:06.698 12:22:19 -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:06.698 12:22:19 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:06.698 12:22:19 -- 
host/perf.sh@45 -- # for bdev in $bdevs 00:28:06.699 12:22:19 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:06.959 12:22:19 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:07.220 [2024-06-11 12:22:19.994717] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:07.220 12:22:20 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:07.220 12:22:20 -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:28:07.220 12:22:20 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:28:07.220 12:22:20 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:07.220 12:22:20 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:28:08.602 Initializing NVMe Controllers 00:28:08.602 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:28:08.602 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:28:08.602 Initialization complete. Launching workers. 00:28:08.602 ======================================================== 00:28:08.602 Latency(us) 00:28:08.602 Device Information : IOPS MiB/s Average min max 00:28:08.602 PCIE (0000:65:00.0) NSID 1 from core 0: 81182.04 317.12 393.54 13.35 4776.65 00:28:08.602 ======================================================== 00:28:08.602 Total : 81182.04 317.12 393.54 13.35 4776.65 00:28:08.602 00:28:08.602 12:22:21 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:08.602 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.988 Initializing NVMe Controllers 00:28:09.988 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:09.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:09.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:09.988 Initialization complete. Launching workers. 
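The target-side configuration traced above boils down to a short rpc.py sequence: attach the local NVMe drive as a bdev, create a Malloc bdev, create the TCP transport, and publish both bdevs as namespaces of one subsystem listening on 10.0.0.2:4420. A minimal sketch using the commands from this log (the full path to scripts/rpc.py is shortened to rpc.py):

    scripts/gen_nvme.sh | rpc.py load_subsystem_config    # attach local NVMe at 0000:65:00.0 as Nvme0n1 (piping inferred; the trace only shows both commands)
    rpc.py bdev_malloc_create 64 512                      # -> Malloc0
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

After a baseline spdk_nvme_perf run against the local PCIe controller (0000:65:00.0), the first run over the fabric (-q 1 -o 4096) starts; its latency table is printed next.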
00:28:09.988 ======================================================== 00:28:09.988 Latency(us) 00:28:09.988 Device Information : IOPS MiB/s Average min max 00:28:09.988 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 97.00 0.38 10367.07 226.27 45796.83 00:28:09.988 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16485.53 7954.87 47900.93 00:28:09.988 ======================================================== 00:28:09.988 Total : 158.00 0.62 12729.26 226.27 47900.93 00:28:09.988 00:28:09.988 12:22:22 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:09.988 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.929 Initializing NVMe Controllers 00:28:10.929 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:10.929 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:10.929 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:10.929 Initialization complete. Launching workers. 00:28:10.929 ======================================================== 00:28:10.929 Latency(us) 00:28:10.929 Device Information : IOPS MiB/s Average min max 00:28:10.929 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11266.98 44.01 2851.26 398.18 6430.44 00:28:10.929 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3856.99 15.07 8335.10 6424.15 16448.73 00:28:10.929 ======================================================== 00:28:10.929 Total : 15123.97 59.08 4249.78 398.18 16448.73 00:28:10.929 00:28:10.929 12:22:23 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:10.929 12:22:23 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:10.929 12:22:23 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:10.929 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.471 Initializing NVMe Controllers 00:28:13.471 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:13.471 Controller IO queue size 128, less than required. 00:28:13.471 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:13.471 Controller IO queue size 128, less than required. 00:28:13.471 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:13.471 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:13.471 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:13.471 Initialization complete. Launching workers. 
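The numbers above come from spdk_nvme_perf, SPDK's userspace performance tool; the same binary is reused for every run in this test with different queue depths and I/O sizes. The shape of the invocation, with the commonly documented flag meanings (the extra -H/-I style flags that appear on some runs are passed through exactly as the script uses them and are not glossed here):

    # -q  queue depth per namespace
    # -o  I/O size in bytes
    # -w  workload pattern (randrw = random mixed read/write)
    # -M  read percentage for mixed workloads
    # -t  run time in seconds
    # -r  transport ID of the target to connect to
    spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The results table for this queue-depth-32 run follows.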
00:28:13.471 ======================================================== 00:28:13.471 Latency(us) 00:28:13.471 Device Information : IOPS MiB/s Average min max 00:28:13.471 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1549.16 387.29 84046.68 55260.47 127113.70 00:28:13.471 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 589.37 147.34 223981.87 72155.47 340547.11 00:28:13.471 ======================================================== 00:28:13.471 Total : 2138.53 534.63 122612.27 55260.47 340547.11 00:28:13.471 00:28:13.471 12:22:26 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:13.471 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.731 No valid NVMe controllers or AIO or URING devices found 00:28:13.731 Initializing NVMe Controllers 00:28:13.731 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:13.731 Controller IO queue size 128, less than required. 00:28:13.731 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:13.731 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:13.731 Controller IO queue size 128, less than required. 00:28:13.731 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:13.731 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:13.731 WARNING: Some requested NVMe devices were skipped 00:28:13.731 12:22:26 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:13.731 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.274 Initializing NVMe Controllers 00:28:16.274 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:16.274 Controller IO queue size 128, less than required. 00:28:16.275 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:16.275 Controller IO queue size 128, less than required. 00:28:16.275 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:16.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:16.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:16.275 Initialization complete. Launching workers. 
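The run above with -o 36964 is deliberately odd-sized: 36964 is not a multiple of the 512-byte sector size (36964 = 72 * 512 + 100), so both namespaces are removed from the test and the tool reports that no valid NVMe controllers remain. A quick check:

    $ echo $((36964 % 512))
    100

The next run goes back to 256 KiB I/O (-o 262144) and adds --transport-stat, which is why the per-queue TCP poll and completion counters appear in the output that follows.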
00:28:16.275 00:28:16.275 ==================== 00:28:16.275 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:16.275 TCP transport: 00:28:16.275 polls: 21026 00:28:16.275 idle_polls: 12515 00:28:16.275 sock_completions: 8511 00:28:16.275 nvme_completions: 6697 00:28:16.275 submitted_requests: 10244 00:28:16.275 queued_requests: 1 00:28:16.275 00:28:16.275 ==================== 00:28:16.275 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:16.275 TCP transport: 00:28:16.275 polls: 21364 00:28:16.275 idle_polls: 10039 00:28:16.275 sock_completions: 11325 00:28:16.275 nvme_completions: 6927 00:28:16.275 submitted_requests: 10536 00:28:16.275 queued_requests: 1 00:28:16.275 ======================================================== 00:28:16.275 Latency(us) 00:28:16.275 Device Information : IOPS MiB/s Average min max 00:28:16.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1736.11 434.03 75074.89 46699.23 119161.94 00:28:16.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1794.07 448.52 71899.04 41789.98 120487.53 00:28:16.275 ======================================================== 00:28:16.275 Total : 3530.18 882.55 73460.89 41789.98 120487.53 00:28:16.275 00:28:16.275 12:22:29 -- host/perf.sh@66 -- # sync 00:28:16.275 12:22:29 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:16.535 12:22:29 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:16.535 12:22:29 -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:28:16.535 12:22:29 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:17.475 12:22:30 -- host/perf.sh@72 -- # ls_guid=150a576a-fa90-46e6-ab3c-ecb287e6d494 00:28:17.475 12:22:30 -- host/perf.sh@73 -- # get_lvs_free_mb 150a576a-fa90-46e6-ab3c-ecb287e6d494 00:28:17.475 12:22:30 -- common/autotest_common.sh@1343 -- # local lvs_uuid=150a576a-fa90-46e6-ab3c-ecb287e6d494 00:28:17.475 12:22:30 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:17.475 12:22:30 -- common/autotest_common.sh@1345 -- # local fc 00:28:17.475 12:22:30 -- common/autotest_common.sh@1346 -- # local cs 00:28:17.475 12:22:30 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:17.735 12:22:30 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:17.735 { 00:28:17.735 "uuid": "150a576a-fa90-46e6-ab3c-ecb287e6d494", 00:28:17.735 "name": "lvs_0", 00:28:17.735 "base_bdev": "Nvme0n1", 00:28:17.735 "total_data_clusters": 457407, 00:28:17.735 "free_clusters": 457407, 00:28:17.735 "block_size": 512, 00:28:17.735 "cluster_size": 4194304 00:28:17.735 } 00:28:17.735 ]' 00:28:17.735 12:22:30 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="150a576a-fa90-46e6-ab3c-ecb287e6d494") .free_clusters' 00:28:17.735 12:22:30 -- common/autotest_common.sh@1348 -- # fc=457407 00:28:17.735 12:22:30 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="150a576a-fa90-46e6-ab3c-ecb287e6d494") .cluster_size' 00:28:17.736 12:22:30 -- common/autotest_common.sh@1349 -- # cs=4194304 00:28:17.736 12:22:30 -- common/autotest_common.sh@1352 -- # free_mb=1829628 00:28:17.736 12:22:30 -- common/autotest_common.sh@1353 -- # echo 1829628 00:28:17.736 1829628 00:28:17.736 12:22:30 -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:28:17.736 
12:22:30 -- host/perf.sh@78 -- # free_mb=20480 00:28:17.736 12:22:30 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 150a576a-fa90-46e6-ab3c-ecb287e6d494 lbd_0 20480 00:28:17.996 12:22:30 -- host/perf.sh@80 -- # lb_guid=8f656054-d5c0-45f4-8919-b9549dc4f119 00:28:17.996 12:22:30 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 8f656054-d5c0-45f4-8919-b9549dc4f119 lvs_n_0 00:28:19.909 12:22:32 -- host/perf.sh@83 -- # ls_nested_guid=ef3f594f-af52-445e-a2cb-08bc06966c02 00:28:19.909 12:22:32 -- host/perf.sh@84 -- # get_lvs_free_mb ef3f594f-af52-445e-a2cb-08bc06966c02 00:28:19.909 12:22:32 -- common/autotest_common.sh@1343 -- # local lvs_uuid=ef3f594f-af52-445e-a2cb-08bc06966c02 00:28:19.909 12:22:32 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:19.909 12:22:32 -- common/autotest_common.sh@1345 -- # local fc 00:28:19.909 12:22:32 -- common/autotest_common.sh@1346 -- # local cs 00:28:19.909 12:22:32 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:19.909 12:22:32 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:19.909 { 00:28:19.909 "uuid": "150a576a-fa90-46e6-ab3c-ecb287e6d494", 00:28:19.909 "name": "lvs_0", 00:28:19.909 "base_bdev": "Nvme0n1", 00:28:19.909 "total_data_clusters": 457407, 00:28:19.909 "free_clusters": 452287, 00:28:19.909 "block_size": 512, 00:28:19.909 "cluster_size": 4194304 00:28:19.909 }, 00:28:19.909 { 00:28:19.909 "uuid": "ef3f594f-af52-445e-a2cb-08bc06966c02", 00:28:19.909 "name": "lvs_n_0", 00:28:19.909 "base_bdev": "8f656054-d5c0-45f4-8919-b9549dc4f119", 00:28:19.909 "total_data_clusters": 5114, 00:28:19.909 "free_clusters": 5114, 00:28:19.909 "block_size": 512, 00:28:19.909 "cluster_size": 4194304 00:28:19.909 } 00:28:19.909 ]' 00:28:19.909 12:22:32 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="ef3f594f-af52-445e-a2cb-08bc06966c02") .free_clusters' 00:28:19.909 12:22:32 -- common/autotest_common.sh@1348 -- # fc=5114 00:28:19.909 12:22:32 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="ef3f594f-af52-445e-a2cb-08bc06966c02") .cluster_size' 00:28:19.909 12:22:32 -- common/autotest_common.sh@1349 -- # cs=4194304 00:28:19.909 12:22:32 -- common/autotest_common.sh@1352 -- # free_mb=20456 00:28:19.909 12:22:32 -- common/autotest_common.sh@1353 -- # echo 20456 00:28:19.909 20456 00:28:19.909 12:22:32 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:19.909 12:22:32 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ef3f594f-af52-445e-a2cb-08bc06966c02 lbd_nest_0 20456 00:28:19.909 12:22:32 -- host/perf.sh@88 -- # lb_nested_guid=701a01b7-9e91-4495-930b-e97fce24b824 00:28:19.909 12:22:32 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:20.170 12:22:33 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:20.170 12:22:33 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 701a01b7-9e91-4495-930b-e97fce24b824 00:28:20.430 12:22:33 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:20.430 12:22:33 -- host/perf.sh@95 -- # 
qd_depth=("1" "32" "128") 00:28:20.430 12:22:33 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:20.430 12:22:33 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:20.430 12:22:33 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:20.430 12:22:33 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:20.430 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.695 Initializing NVMe Controllers 00:28:32.695 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:32.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:32.695 Initialization complete. Launching workers. 00:28:32.695 ======================================================== 00:28:32.695 Latency(us) 00:28:32.695 Device Information : IOPS MiB/s Average min max 00:28:32.695 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.70 0.02 21482.84 123.31 46078.64 00:28:32.695 ======================================================== 00:28:32.695 Total : 46.70 0.02 21482.84 123.31 46078.64 00:28:32.695 00:28:32.695 12:22:43 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:32.695 12:22:43 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:32.695 EAL: No free 2048 kB hugepages reported on node 1 00:28:42.737 Initializing NVMe Controllers 00:28:42.737 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:42.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:42.737 Initialization complete. Launching workers. 00:28:42.737 ======================================================== 00:28:42.737 Latency(us) 00:28:42.737 Device Information : IOPS MiB/s Average min max 00:28:42.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 82.00 10.25 12213.64 7947.16 51877.83 00:28:42.737 ======================================================== 00:28:42.737 Total : 82.00 10.25 12213.64 7947.16 51877.83 00:28:42.737 00:28:42.737 12:22:54 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:42.737 12:22:54 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:42.737 12:22:54 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:42.737 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.737 Initializing NVMe Controllers 00:28:52.737 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:52.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:52.737 Initialization complete. Launching workers. 
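From here to the end of the perf test is a small sweep: the lvol namespace created above (lbd_nest_0, a 20456 MiB volume nested inside the 20480 MiB lbd_0, itself carved from the drive's 1,829,628 MiB lvstore, i.e. 457407 clusters x 4 MiB) is exercised at every combination of queue depth and I/O size. The loop structure, reconstructed from the qd_depth and io_size arrays in the trace:

    qd_depth=("1" "32" "128")
    io_size=("512" "131072")
    for qd in "${qd_depth[@]}"; do
        for o in "${io_size[@]}"; do
            spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
                -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
        done
    done

Six runs in total; the 10-second duration (-t 10) is why the timestamps jump by roughly ten seconds between result tables. The depth-32 run at 512-byte I/O is the one whose table follows.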
00:28:52.737 ======================================================== 00:28:52.737 Latency(us) 00:28:52.737 Device Information : IOPS MiB/s Average min max 00:28:52.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8803.54 4.30 3634.55 258.82 10302.17 00:28:52.737 ======================================================== 00:28:52.737 Total : 8803.54 4.30 3634.55 258.82 10302.17 00:28:52.737 00:28:52.737 12:23:04 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:52.737 12:23:04 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:52.737 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.742 Initializing NVMe Controllers 00:29:02.742 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:02.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:02.742 Initialization complete. Launching workers. 00:29:02.742 ======================================================== 00:29:02.742 Latency(us) 00:29:02.742 Device Information : IOPS MiB/s Average min max 00:29:02.742 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3890.90 486.36 8229.51 645.54 23857.00 00:29:02.742 ======================================================== 00:29:02.742 Total : 3890.90 486.36 8229.51 645.54 23857.00 00:29:02.742 00:29:02.742 12:23:14 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:02.742 12:23:14 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:02.742 12:23:14 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:02.742 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.741 Initializing NVMe Controllers 00:29:12.741 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:12.741 Controller IO queue size 128, less than required. 00:29:12.741 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:12.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:12.741 Initialization complete. Launching workers. 00:29:12.741 ======================================================== 00:29:12.741 Latency(us) 00:29:12.741 Device Information : IOPS MiB/s Average min max 00:29:12.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15810.70 7.72 8095.70 1923.72 25901.40 00:29:12.741 ======================================================== 00:29:12.741 Total : 15810.70 7.72 8095.70 1923.72 25901.40 00:29:12.741 00:29:12.741 12:23:25 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:12.742 12:23:25 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:12.742 EAL: No free 2048 kB hugepages reported on node 1 00:29:22.734 Initializing NVMe Controllers 00:29:22.734 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:22.734 Controller IO queue size 128, less than required. 00:29:22.734 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
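The "Controller IO queue size 128, less than required" line is the perf tool noting, roughly, that the requested -q 128 cannot be kept fully outstanding on an I/O queue of size 128 (and large I/Os that get split need even more entries), so the surplus simply waits in the NVMe driver; it is informational, not an error. Lowering the depth or the I/O size avoids it, for example:

    # hypothetical lower-depth variant, not part of this test run
    spdk_nvme_perf -q 64 -o 131072 -w randrw -M 50 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'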
00:29:22.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:22.734 Initialization complete. Launching workers. 00:29:22.734 ======================================================== 00:29:22.734 Latency(us) 00:29:22.734 Device Information : IOPS MiB/s Average min max 00:29:22.734 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1191.38 148.92 108301.46 15822.56 227124.86 00:29:22.734 ======================================================== 00:29:22.734 Total : 1191.38 148.92 108301.46 15822.56 227124.86 00:29:22.734 00:29:22.734 12:23:35 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:22.735 12:23:35 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 701a01b7-9e91-4495-930b-e97fce24b824 00:29:24.646 12:23:37 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:24.646 12:23:37 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8f656054-d5c0-45f4-8919-b9549dc4f119 00:29:24.906 12:23:37 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:24.906 12:23:37 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:24.906 12:23:37 -- host/perf.sh@114 -- # nvmftestfini 00:29:24.906 12:23:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:24.906 12:23:37 -- nvmf/common.sh@116 -- # sync 00:29:24.906 12:23:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:24.906 12:23:37 -- nvmf/common.sh@119 -- # set +e 00:29:24.906 12:23:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:24.906 12:23:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:24.906 rmmod nvme_tcp 00:29:24.906 rmmod nvme_fabrics 00:29:24.906 rmmod nvme_keyring 00:29:24.906 12:23:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:24.906 12:23:37 -- nvmf/common.sh@123 -- # set -e 00:29:24.906 12:23:37 -- nvmf/common.sh@124 -- # return 0 00:29:24.906 12:23:37 -- nvmf/common.sh@477 -- # '[' -n 1633011 ']' 00:29:24.906 12:23:37 -- nvmf/common.sh@478 -- # killprocess 1633011 00:29:24.906 12:23:37 -- common/autotest_common.sh@926 -- # '[' -z 1633011 ']' 00:29:24.906 12:23:37 -- common/autotest_common.sh@930 -- # kill -0 1633011 00:29:24.906 12:23:37 -- common/autotest_common.sh@931 -- # uname 00:29:24.906 12:23:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:24.906 12:23:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1633011 00:29:25.168 12:23:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:25.168 12:23:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:25.168 12:23:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1633011' 00:29:25.168 killing process with pid 1633011 00:29:25.168 12:23:37 -- common/autotest_common.sh@945 -- # kill 1633011 00:29:25.168 12:23:37 -- common/autotest_common.sh@950 -- # wait 1633011 00:29:27.079 12:23:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:27.079 12:23:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:27.079 12:23:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:27.079 12:23:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:27.079 12:23:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:27.079 12:23:39 -- 
nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.079 12:23:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:27.079 12:23:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.619 12:23:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:29.619 00:29:29.619 real 1m31.150s 00:29:29.619 user 5m25.108s 00:29:29.619 sys 0m13.978s 00:29:29.619 12:23:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:29.619 12:23:42 -- common/autotest_common.sh@10 -- # set +x 00:29:29.619 ************************************ 00:29:29.619 END TEST nvmf_perf 00:29:29.619 ************************************ 00:29:29.619 12:23:42 -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:29.619 12:23:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:29.619 12:23:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:29.619 12:23:42 -- common/autotest_common.sh@10 -- # set +x 00:29:29.619 ************************************ 00:29:29.619 START TEST nvmf_fio_host 00:29:29.619 ************************************ 00:29:29.619 12:23:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:29.619 * Looking for test storage... 00:29:29.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:29.619 12:23:42 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.619 12:23:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.619 12:23:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.619 12:23:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.619 12:23:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.619 12:23:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.619 12:23:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.619 12:23:42 -- paths/export.sh@5 -- # export PATH 00:29:29.619 12:23:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.619 12:23:42 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.619 12:23:42 -- nvmf/common.sh@7 -- # uname -s 00:29:29.619 12:23:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.619 12:23:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.619 12:23:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.619 12:23:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.619 12:23:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.619 12:23:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.619 12:23:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.619 12:23:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.619 12:23:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.619 12:23:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.619 12:23:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:29.619 12:23:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:29.619 12:23:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.619 12:23:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.619 12:23:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.619 12:23:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.619 12:23:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.619 12:23:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.619 12:23:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.619 12:23:42 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.619 12:23:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.619 12:23:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.619 12:23:42 -- paths/export.sh@5 -- # export PATH 00:29:29.620 12:23:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.620 12:23:42 -- nvmf/common.sh@46 -- # : 0 00:29:29.620 12:23:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:29.620 12:23:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:29.620 12:23:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:29.620 12:23:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.620 12:23:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.620 12:23:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:29.620 12:23:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:29.620 12:23:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:29.620 12:23:42 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:29.620 12:23:42 -- host/fio.sh@14 -- # nvmftestinit 00:29:29.620 12:23:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:29.620 12:23:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.620 12:23:42 -- nvmf/common.sh@436 
-- # prepare_net_devs 00:29:29.620 12:23:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:29.620 12:23:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:29.620 12:23:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.620 12:23:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:29.620 12:23:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.620 12:23:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:29.620 12:23:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:29.620 12:23:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:29.620 12:23:42 -- common/autotest_common.sh@10 -- # set +x 00:29:36.284 12:23:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:36.284 12:23:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:36.284 12:23:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:36.284 12:23:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:36.284 12:23:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:36.284 12:23:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:36.284 12:23:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:36.284 12:23:49 -- nvmf/common.sh@294 -- # net_devs=() 00:29:36.284 12:23:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:36.284 12:23:49 -- nvmf/common.sh@295 -- # e810=() 00:29:36.284 12:23:49 -- nvmf/common.sh@295 -- # local -ga e810 00:29:36.284 12:23:49 -- nvmf/common.sh@296 -- # x722=() 00:29:36.284 12:23:49 -- nvmf/common.sh@296 -- # local -ga x722 00:29:36.284 12:23:49 -- nvmf/common.sh@297 -- # mlx=() 00:29:36.284 12:23:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:36.284 12:23:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:36.284 12:23:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:36.284 12:23:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:36.284 12:23:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:36.284 12:23:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:36.284 12:23:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:36.284 12:23:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:36.284 12:23:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:36.284 12:23:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:36.284 12:23:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:36.284 12:23:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:36.284 12:23:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:36.284 12:23:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:36.284 12:23:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:36.284 12:23:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:36.284 12:23:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:36.284 12:23:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:36.284 12:23:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:36.284 12:23:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:36.284 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:36.284 12:23:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:36.284 12:23:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:36.284 12:23:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.284 12:23:49 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.284 12:23:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:36.284 12:23:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:36.284 12:23:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:36.284 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:36.284 12:23:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:36.284 12:23:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:36.284 12:23:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.284 12:23:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.284 12:23:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:36.284 12:23:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:36.284 12:23:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:36.284 12:23:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:36.284 12:23:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:36.284 12:23:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.284 12:23:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:36.284 12:23:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.284 12:23:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:36.284 Found net devices under 0000:31:00.0: cvl_0_0 00:29:36.284 12:23:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.284 12:23:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:36.284 12:23:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.284 12:23:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:36.284 12:23:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.284 12:23:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:36.284 Found net devices under 0000:31:00.1: cvl_0_1 00:29:36.284 12:23:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.284 12:23:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:36.284 12:23:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:36.284 12:23:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:36.284 12:23:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:36.284 12:23:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:36.284 12:23:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:36.284 12:23:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:36.284 12:23:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:36.284 12:23:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:36.284 12:23:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:36.284 12:23:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:36.284 12:23:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:36.284 12:23:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:36.284 12:23:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:36.284 12:23:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:36.284 12:23:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:36.284 12:23:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:36.284 12:23:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:36.545 12:23:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:36.545 12:23:49 -- nvmf/common.sh@254 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:36.545 12:23:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:36.545 12:23:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:36.545 12:23:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:36.545 12:23:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:36.545 12:23:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:36.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:36.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:29:36.545 00:29:36.545 --- 10.0.0.2 ping statistics --- 00:29:36.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.545 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:29:36.545 12:23:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:36.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:36.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:29:36.545 00:29:36.545 --- 10.0.0.1 ping statistics --- 00:29:36.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.545 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:29:36.545 12:23:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:36.545 12:23:49 -- nvmf/common.sh@410 -- # return 0 00:29:36.545 12:23:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:36.545 12:23:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:36.545 12:23:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:36.545 12:23:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:36.545 12:23:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:36.545 12:23:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:36.545 12:23:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:36.545 12:23:49 -- host/fio.sh@16 -- # [[ y != y ]] 00:29:36.545 12:23:49 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:36.545 12:23:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:36.545 12:23:49 -- common/autotest_common.sh@10 -- # set +x 00:29:36.545 12:23:49 -- host/fio.sh@24 -- # nvmfpid=1653148 00:29:36.545 12:23:49 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:36.545 12:23:49 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:36.545 12:23:49 -- host/fio.sh@28 -- # waitforlisten 1653148 00:29:36.546 12:23:49 -- common/autotest_common.sh@819 -- # '[' -z 1653148 ']' 00:29:36.546 12:23:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.546 12:23:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:36.546 12:23:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.546 12:23:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:36.546 12:23:49 -- common/autotest_common.sh@10 -- # set +x 00:29:36.806 [2024-06-11 12:23:49.603098] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
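fio.sh repeats the same namespace bring-up as perf.sh above and then starts its own nvmf_tgt inside the namespace, recording the pid (1653148 here) and waiting on the application's RPC socket before sending any configuration. Conceptually the wait amounts to polling the UNIX socket until an RPC answers; a rough sketch, not the actual waitforlisten helper from autotest_common.sh:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # illustrative loop: poll until the target answers on /var/tmp/spdk.sock
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

Once the socket answers, the Malloc1 bdev and the cnode1 subsystem are created through rpc.py, and fio is pointed at them via the SPDK NVMe fio plugin, as traced below.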
00:29:36.806 [2024-06-11 12:23:49.603163] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:36.806 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.806 [2024-06-11 12:23:49.674960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:36.806 [2024-06-11 12:23:49.712666] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:36.806 [2024-06-11 12:23:49.712814] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:36.806 [2024-06-11 12:23:49.712826] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:36.806 [2024-06-11 12:23:49.712833] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:36.806 [2024-06-11 12:23:49.712979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:36.806 [2024-06-11 12:23:49.713124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:36.806 [2024-06-11 12:23:49.713479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:36.806 [2024-06-11 12:23:49.713480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.377 12:23:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:37.377 12:23:50 -- common/autotest_common.sh@852 -- # return 0 00:29:37.377 12:23:50 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:37.637 [2024-06-11 12:23:50.517653] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:37.637 12:23:50 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:37.637 12:23:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:37.637 12:23:50 -- common/autotest_common.sh@10 -- # set +x 00:29:37.637 12:23:50 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:37.898 Malloc1 00:29:37.898 12:23:50 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:37.898 12:23:50 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:38.158 12:23:51 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:38.418 [2024-06-11 12:23:51.223171] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.418 12:23:51 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:38.418 12:23:51 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:38.419 12:23:51 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:38.419 12:23:51 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:38.419 12:23:51 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:38.419 12:23:51 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:38.419 12:23:51 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:38.419 12:23:51 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:38.419 12:23:51 -- common/autotest_common.sh@1320 -- # shift 00:29:38.419 12:23:51 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:38.419 12:23:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:38.419 12:23:51 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:38.419 12:23:51 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:38.419 12:23:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:38.419 12:23:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:38.419 12:23:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:38.419 12:23:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:38.419 12:23:51 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:38.419 12:23:51 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:38.419 12:23:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:38.705 12:23:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:38.705 12:23:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:38.705 12:23:51 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:38.705 12:23:51 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:38.969 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:38.969 fio-3.35 00:29:38.969 Starting 1 thread 00:29:38.969 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.500 00:29:41.500 test: (groupid=0, jobs=1): err= 0: pid=1653710: Tue Jun 11 12:23:54 2024 00:29:41.500 read: IOPS=15.0k, BW=58.5MiB/s (61.3MB/s)(117MiB/2004msec) 00:29:41.500 slat (usec): min=2, max=310, avg= 2.17, stdev= 2.50 00:29:41.500 clat (usec): min=3149, max=8959, avg=4715.88, stdev=377.45 00:29:41.500 lat (usec): min=3152, max=8965, avg=4718.06, stdev=377.74 00:29:41.500 clat percentiles (usec): 00:29:41.500 | 1.00th=[ 3949], 5.00th=[ 4178], 10.00th=[ 4293], 20.00th=[ 4424], 00:29:41.500 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4817], 00:29:41.500 | 70.00th=[ 4883], 80.00th=[ 4948], 90.00th=[ 5080], 95.00th=[ 5211], 00:29:41.500 | 99.00th=[ 5604], 99.50th=[ 6325], 99.90th=[ 8160], 99.95th=[ 8455], 00:29:41.500 | 99.99th=[ 8848] 00:29:41.500 bw ( KiB/s): min=58728, max=60512, per=99.98%, avg=59886.00, stdev=836.69, samples=4 00:29:41.500 iops : min=14682, max=15128, avg=14971.50, stdev=209.17, samples=4 00:29:41.500 write: IOPS=15.0k, BW=58.5MiB/s (61.4MB/s)(117MiB/2004msec); 0 zone resets 00:29:41.500 slat (usec): min=2, max=270, avg= 2.25, stdev= 1.71 00:29:41.500 clat (usec): min=2624, 
max=7392, avg=3807.67, stdev=332.93 00:29:41.500 lat (usec): min=2627, max=7399, avg=3809.93, stdev=333.24 00:29:41.500 clat percentiles (usec): 00:29:41.500 | 1.00th=[ 3163], 5.00th=[ 3359], 10.00th=[ 3458], 20.00th=[ 3589], 00:29:41.500 | 30.00th=[ 3654], 40.00th=[ 3720], 50.00th=[ 3785], 60.00th=[ 3851], 00:29:41.500 | 70.00th=[ 3916], 80.00th=[ 4015], 90.00th=[ 4113], 95.00th=[ 4228], 00:29:41.500 | 99.00th=[ 4555], 99.50th=[ 5473], 99.90th=[ 7177], 99.95th=[ 7308], 00:29:41.500 | 99.99th=[ 7373] 00:29:41.500 bw ( KiB/s): min=59120, max=60640, per=100.00%, avg=59916.00, stdev=623.16, samples=4 00:29:41.500 iops : min=14780, max=15160, avg=14979.00, stdev=155.79, samples=4 00:29:41.500 lat (msec) : 4=40.42%, 10=59.58% 00:29:41.500 cpu : usr=75.14%, sys=23.41%, ctx=22, majf=0, minf=6 00:29:41.500 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:29:41.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:41.500 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:41.500 issued rwts: total=30009,30019,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:41.500 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:41.500 00:29:41.500 Run status group 0 (all jobs): 00:29:41.500 READ: bw=58.5MiB/s (61.3MB/s), 58.5MiB/s-58.5MiB/s (61.3MB/s-61.3MB/s), io=117MiB (123MB), run=2004-2004msec 00:29:41.500 WRITE: bw=58.5MiB/s (61.4MB/s), 58.5MiB/s-58.5MiB/s (61.4MB/s-61.4MB/s), io=117MiB (123MB), run=2004-2004msec 00:29:41.500 12:23:54 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:41.500 12:23:54 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:41.500 12:23:54 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:41.500 12:23:54 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:41.500 12:23:54 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:41.500 12:23:54 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:41.500 12:23:54 -- common/autotest_common.sh@1320 -- # shift 00:29:41.500 12:23:54 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:41.500 12:23:54 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:41.500 12:23:54 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:41.500 12:23:54 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:41.500 12:23:54 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:41.500 12:23:54 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:41.500 12:23:54 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:41.500 12:23:54 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:41.500 12:23:54 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:41.500 12:23:54 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:41.500 12:23:54 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:41.500 12:23:54 -- common/autotest_common.sh@1324 -- # 
asan_lib= 00:29:41.500 12:23:54 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:41.500 12:23:54 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:41.501 12:23:54 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:41.501 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:41.501 fio-3.35 00:29:41.501 Starting 1 thread 00:29:41.501 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.034 00:29:44.034 test: (groupid=0, jobs=1): err= 0: pid=1654511: Tue Jun 11 12:23:56 2024 00:29:44.034 read: IOPS=9467, BW=148MiB/s (155MB/s)(297MiB/2006msec) 00:29:44.034 slat (usec): min=3, max=116, avg= 3.66, stdev= 1.84 00:29:44.034 clat (usec): min=2222, max=15730, avg=8217.60, stdev=2014.72 00:29:44.034 lat (usec): min=2225, max=15751, avg=8221.26, stdev=2014.94 00:29:44.034 clat percentiles (usec): 00:29:44.034 | 1.00th=[ 4228], 5.00th=[ 5145], 10.00th=[ 5669], 20.00th=[ 6390], 00:29:44.034 | 30.00th=[ 6980], 40.00th=[ 7504], 50.00th=[ 8094], 60.00th=[ 8717], 00:29:44.034 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[10814], 95.00th=[11338], 00:29:44.034 | 99.00th=[13042], 99.50th=[13566], 99.90th=[14353], 99.95th=[15270], 00:29:44.034 | 99.99th=[15664] 00:29:44.034 bw ( KiB/s): min=63328, max=93184, per=49.45%, avg=74904.00, stdev=12897.32, samples=4 00:29:44.034 iops : min= 3958, max= 5824, avg=4681.50, stdev=806.08, samples=4 00:29:44.034 write: IOPS=5664, BW=88.5MiB/s (92.8MB/s)(153MiB/1734msec); 0 zone resets 00:29:44.034 slat (usec): min=39, max=442, avg=41.19, stdev= 8.64 00:29:44.034 clat (usec): min=2312, max=17522, avg=9280.41, stdev=1608.44 00:29:44.034 lat (usec): min=2353, max=17658, avg=9321.60, stdev=1610.73 00:29:44.034 clat percentiles (usec): 00:29:44.034 | 1.00th=[ 5997], 5.00th=[ 6980], 10.00th=[ 7373], 20.00th=[ 7963], 00:29:44.034 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9634], 00:29:44.034 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11207], 95.00th=[11994], 00:29:44.034 | 99.00th=[13960], 99.50th=[14353], 99.90th=[16712], 99.95th=[17171], 00:29:44.034 | 99.99th=[17433] 00:29:44.034 bw ( KiB/s): min=65728, max=97280, per=86.14%, avg=78064.00, stdev=13564.52, samples=4 00:29:44.034 iops : min= 4108, max= 6080, avg=4879.00, stdev=847.78, samples=4 00:29:44.034 lat (msec) : 4=0.48%, 10=75.22%, 20=24.30% 00:29:44.034 cpu : usr=84.44%, sys=14.06%, ctx=15, majf=0, minf=33 00:29:44.034 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:29:44.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:44.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:44.034 issued rwts: total=18992,9822,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:44.034 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:44.034 00:29:44.034 Run status group 0 (all jobs): 00:29:44.034 READ: bw=148MiB/s (155MB/s), 148MiB/s-148MiB/s (155MB/s-155MB/s), io=297MiB (311MB), run=2006-2006msec 00:29:44.034 WRITE: bw=88.5MiB/s (92.8MB/s), 88.5MiB/s-88.5MiB/s (92.8MB/s-92.8MB/s), io=153MiB (161MB), run=1734-1734msec 00:29:44.034 12:23:56 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:44.034 12:23:56 -- 
host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:44.034 12:23:56 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:44.034 12:23:56 -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:44.034 12:23:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:44.034 12:23:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:29:44.034 12:23:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:44.034 12:23:56 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:44.034 12:23:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:44.293 12:23:57 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:29:44.293 12:23:57 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:29:44.293 12:23:57 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:29:44.551 Nvme0n1 00:29:44.551 12:23:57 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:45.117 12:23:58 -- host/fio.sh@53 -- # ls_guid=559c3431-9d38-4fd0-9e52-114852ab276b 00:29:45.117 12:23:58 -- host/fio.sh@54 -- # get_lvs_free_mb 559c3431-9d38-4fd0-9e52-114852ab276b 00:29:45.117 12:23:58 -- common/autotest_common.sh@1343 -- # local lvs_uuid=559c3431-9d38-4fd0-9e52-114852ab276b 00:29:45.117 12:23:58 -- common/autotest_common.sh@1344 -- # local lvs_info 00:29:45.117 12:23:58 -- common/autotest_common.sh@1345 -- # local fc 00:29:45.117 12:23:58 -- common/autotest_common.sh@1346 -- # local cs 00:29:45.117 12:23:58 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:45.376 12:23:58 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:29:45.376 { 00:29:45.376 "uuid": "559c3431-9d38-4fd0-9e52-114852ab276b", 00:29:45.376 "name": "lvs_0", 00:29:45.376 "base_bdev": "Nvme0n1", 00:29:45.376 "total_data_clusters": 1787, 00:29:45.376 "free_clusters": 1787, 00:29:45.376 "block_size": 512, 00:29:45.376 "cluster_size": 1073741824 00:29:45.376 } 00:29:45.376 ]' 00:29:45.376 12:23:58 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="559c3431-9d38-4fd0-9e52-114852ab276b") .free_clusters' 00:29:45.376 12:23:58 -- common/autotest_common.sh@1348 -- # fc=1787 00:29:45.376 12:23:58 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="559c3431-9d38-4fd0-9e52-114852ab276b") .cluster_size' 00:29:45.376 12:23:58 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:29:45.376 12:23:58 -- common/autotest_common.sh@1352 -- # free_mb=1829888 00:29:45.376 12:23:58 -- common/autotest_common.sh@1353 -- # echo 1829888 00:29:45.376 1829888 00:29:45.376 12:23:58 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:29:45.634 8fc38cee-d859-4304-86f3-dc95d4e7c83d 00:29:45.634 12:23:58 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:45.634 12:23:58 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:45.892 12:23:58 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:46.151 12:23:58 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:46.151 12:23:58 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:46.151 12:23:58 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:46.151 12:23:58 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:46.151 12:23:58 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:46.151 12:23:58 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:46.151 12:23:58 -- common/autotest_common.sh@1320 -- # shift 00:29:46.151 12:23:58 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:46.151 12:23:58 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:46.151 12:23:58 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:46.151 12:23:58 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:46.151 12:23:58 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:46.151 12:23:58 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:46.151 12:23:58 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:46.151 12:23:58 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:46.151 12:23:58 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:46.151 12:23:58 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:46.151 12:23:58 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:46.151 12:23:59 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:46.151 12:23:59 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:46.151 12:23:59 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:46.151 12:23:59 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:46.410 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:46.410 fio-3.35 00:29:46.410 Starting 1 thread 00:29:46.410 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.951 00:29:48.951 test: (groupid=0, jobs=1): err= 0: pid=1655721: Tue Jun 11 12:24:01 2024 00:29:48.951 read: IOPS=10.7k, BW=41.7MiB/s (43.7MB/s)(83.5MiB/2004msec) 00:29:48.951 slat (usec): min=2, max=109, avg= 2.21, stdev= 1.02 00:29:48.951 clat (usec): min=2491, max=10638, avg=6639.03, stdev=506.77 00:29:48.951 lat (usec): min=2502, max=10640, avg=6641.24, stdev=506.72 00:29:48.951 clat percentiles (usec): 00:29:48.951 | 1.00th=[ 5538], 5.00th=[ 5866], 10.00th=[ 6063], 20.00th=[ 6259], 00:29:48.951 | 30.00th=[ 6390], 40.00th=[ 6521], 50.00th=[ 6652], 60.00th=[ 6783], 00:29:48.951 | 70.00th=[ 6915], 80.00th=[ 7046], 90.00th=[ 7242], 95.00th=[ 7439], 00:29:48.951 | 99.00th=[ 7832], 99.50th=[ 8029], 99.90th=[ 9372], 
99.95th=[10028], 00:29:48.951 | 99.99th=[10552] 00:29:48.951 bw ( KiB/s): min=41576, max=43240, per=99.81%, avg=42606.00, stdev=738.93, samples=4 00:29:48.951 iops : min=10394, max=10810, avg=10651.50, stdev=184.73, samples=4 00:29:48.951 write: IOPS=10.7k, BW=41.6MiB/s (43.6MB/s)(83.4MiB/2004msec); 0 zone resets 00:29:48.951 slat (usec): min=2, max=508, avg= 2.33, stdev= 3.53 00:29:48.951 clat (usec): min=1259, max=9519, avg=5310.48, stdev=428.13 00:29:48.951 lat (usec): min=1266, max=9521, avg=5312.81, stdev=428.13 00:29:48.951 clat percentiles (usec): 00:29:48.951 | 1.00th=[ 4293], 5.00th=[ 4621], 10.00th=[ 4817], 20.00th=[ 4948], 00:29:48.951 | 30.00th=[ 5080], 40.00th=[ 5211], 50.00th=[ 5342], 60.00th=[ 5407], 00:29:48.951 | 70.00th=[ 5538], 80.00th=[ 5669], 90.00th=[ 5800], 95.00th=[ 5932], 00:29:48.951 | 99.00th=[ 6259], 99.50th=[ 6456], 99.90th=[ 7308], 99.95th=[ 8225], 00:29:48.951 | 99.99th=[ 9372] 00:29:48.951 bw ( KiB/s): min=42136, max=43032, per=100.00%, avg=42620.00, stdev=369.07, samples=4 00:29:48.951 iops : min=10534, max=10758, avg=10655.00, stdev=92.27, samples=4 00:29:48.951 lat (msec) : 2=0.03%, 4=0.13%, 10=99.82%, 20=0.03% 00:29:48.951 cpu : usr=73.74%, sys=25.11%, ctx=27, majf=0, minf=15 00:29:48.951 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:29:48.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:48.951 issued rwts: total=21387,21353,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.951 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:48.951 00:29:48.951 Run status group 0 (all jobs): 00:29:48.951 READ: bw=41.7MiB/s (43.7MB/s), 41.7MiB/s-41.7MiB/s (43.7MB/s-43.7MB/s), io=83.5MiB (87.6MB), run=2004-2004msec 00:29:48.951 WRITE: bw=41.6MiB/s (43.6MB/s), 41.6MiB/s-41.6MiB/s (43.6MB/s-43.6MB/s), io=83.4MiB (87.5MB), run=2004-2004msec 00:29:48.951 12:24:01 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:49.211 12:24:02 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:49.778 12:24:02 -- host/fio.sh@64 -- # ls_nested_guid=f696c5c1-0693-494b-a618-195106bea9d1 00:29:49.778 12:24:02 -- host/fio.sh@65 -- # get_lvs_free_mb f696c5c1-0693-494b-a618-195106bea9d1 00:29:49.778 12:24:02 -- common/autotest_common.sh@1343 -- # local lvs_uuid=f696c5c1-0693-494b-a618-195106bea9d1 00:29:49.778 12:24:02 -- common/autotest_common.sh@1344 -- # local lvs_info 00:29:49.778 12:24:02 -- common/autotest_common.sh@1345 -- # local fc 00:29:49.778 12:24:02 -- common/autotest_common.sh@1346 -- # local cs 00:29:49.778 12:24:02 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:50.037 12:24:02 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:29:50.037 { 00:29:50.037 "uuid": "559c3431-9d38-4fd0-9e52-114852ab276b", 00:29:50.037 "name": "lvs_0", 00:29:50.037 "base_bdev": "Nvme0n1", 00:29:50.037 "total_data_clusters": 1787, 00:29:50.037 "free_clusters": 0, 00:29:50.037 "block_size": 512, 00:29:50.037 "cluster_size": 1073741824 00:29:50.037 }, 00:29:50.037 { 00:29:50.037 "uuid": "f696c5c1-0693-494b-a618-195106bea9d1", 00:29:50.037 "name": "lvs_n_0", 00:29:50.037 "base_bdev": "8fc38cee-d859-4304-86f3-dc95d4e7c83d", 00:29:50.037 "total_data_clusters": 457025, 
00:29:50.037 "free_clusters": 457025, 00:29:50.037 "block_size": 512, 00:29:50.037 "cluster_size": 4194304 00:29:50.037 } 00:29:50.037 ]' 00:29:50.037 12:24:02 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="f696c5c1-0693-494b-a618-195106bea9d1") .free_clusters' 00:29:50.037 12:24:03 -- common/autotest_common.sh@1348 -- # fc=457025 00:29:50.037 12:24:03 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="f696c5c1-0693-494b-a618-195106bea9d1") .cluster_size' 00:29:50.037 12:24:03 -- common/autotest_common.sh@1349 -- # cs=4194304 00:29:50.037 12:24:03 -- common/autotest_common.sh@1352 -- # free_mb=1828100 00:29:50.037 12:24:03 -- common/autotest_common.sh@1353 -- # echo 1828100 00:29:50.037 1828100 00:29:50.037 12:24:03 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:29:51.416 56d728a8-15aa-4f9d-8a91-b6fd9537f458 00:29:51.416 12:24:04 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:51.416 12:24:04 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:51.416 12:24:04 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:29:51.676 12:24:04 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:51.676 12:24:04 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:51.676 12:24:04 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:51.676 12:24:04 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:51.676 12:24:04 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:51.676 12:24:04 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:51.676 12:24:04 -- common/autotest_common.sh@1320 -- # shift 00:29:51.676 12:24:04 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:51.676 12:24:04 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:51.676 12:24:04 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:51.676 12:24:04 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:51.676 12:24:04 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:51.676 12:24:04 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:51.676 12:24:04 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:51.676 12:24:04 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:51.676 12:24:04 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:51.676 12:24:04 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:51.676 12:24:04 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:51.676 12:24:04 -- common/autotest_common.sh@1324 -- # 
asan_lib= 00:29:51.676 12:24:04 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:51.676 12:24:04 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:51.676 12:24:04 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:51.935 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:51.935 fio-3.35 00:29:51.935 Starting 1 thread 00:29:51.935 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.473 00:29:54.473 test: (groupid=0, jobs=1): err= 0: pid=1657016: Tue Jun 11 12:24:07 2024 00:29:54.473 read: IOPS=9745, BW=38.1MiB/s (39.9MB/s)(76.4MiB/2006msec) 00:29:54.473 slat (usec): min=2, max=113, avg= 2.20, stdev= 1.15 00:29:54.473 clat (usec): min=2056, max=11302, avg=7250.54, stdev=556.98 00:29:54.473 lat (usec): min=2073, max=11304, avg=7252.74, stdev=556.93 00:29:54.473 clat percentiles (usec): 00:29:54.473 | 1.00th=[ 5932], 5.00th=[ 6325], 10.00th=[ 6587], 20.00th=[ 6783], 00:29:54.473 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7373], 00:29:54.473 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 7963], 95.00th=[ 8094], 00:29:54.473 | 99.00th=[ 8455], 99.50th=[ 8717], 99.90th=[ 9372], 99.95th=[ 9765], 00:29:54.473 | 99.99th=[11207] 00:29:54.473 bw ( KiB/s): min=37904, max=39608, per=99.96%, avg=38964.00, stdev=756.03, samples=4 00:29:54.473 iops : min= 9476, max= 9902, avg=9741.00, stdev=189.01, samples=4 00:29:54.473 write: IOPS=9754, BW=38.1MiB/s (40.0MB/s)(76.4MiB/2006msec); 0 zone resets 00:29:54.473 slat (nsec): min=2117, max=112626, avg=2292.15, stdev=845.21 00:29:54.473 clat (usec): min=1067, max=10512, avg=5786.11, stdev=490.10 00:29:54.473 lat (usec): min=1074, max=10514, avg=5788.40, stdev=490.07 00:29:54.473 clat percentiles (usec): 00:29:54.473 | 1.00th=[ 4686], 5.00th=[ 5014], 10.00th=[ 5211], 20.00th=[ 5407], 00:29:54.473 | 30.00th=[ 5538], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 5932], 00:29:54.473 | 70.00th=[ 6063], 80.00th=[ 6194], 90.00th=[ 6325], 95.00th=[ 6521], 00:29:54.473 | 99.00th=[ 6849], 99.50th=[ 7046], 99.90th=[ 8848], 99.95th=[ 9634], 00:29:54.473 | 99.99th=[10552] 00:29:54.473 bw ( KiB/s): min=38472, max=39496, per=99.99%, avg=39014.00, stdev=431.52, samples=4 00:29:54.473 iops : min= 9618, max= 9874, avg=9753.50, stdev=107.88, samples=4 00:29:54.473 lat (msec) : 2=0.01%, 4=0.12%, 10=99.84%, 20=0.03% 00:29:54.473 cpu : usr=71.72%, sys=27.13%, ctx=67, majf=0, minf=15 00:29:54.473 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:54.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:54.473 issued rwts: total=19549,19567,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.473 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:54.473 00:29:54.473 Run status group 0 (all jobs): 00:29:54.473 READ: bw=38.1MiB/s (39.9MB/s), 38.1MiB/s-38.1MiB/s (39.9MB/s-39.9MB/s), io=76.4MiB (80.1MB), run=2006-2006msec 00:29:54.473 WRITE: bw=38.1MiB/s (40.0MB/s), 38.1MiB/s-38.1MiB/s (40.0MB/s-40.0MB/s), io=76.4MiB (80.1MB), run=2006-2006msec 00:29:54.473 12:24:07 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 
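Both logical volumes in this test are sized by the same arithmetic: bdev_lvol_get_lvstores reports free_clusters and cluster_size, and the usable space in MiB is free_clusters times the cluster size in MiB — 1787 × 1024 MiB = 1829888 MiB for lvs_0 on Nvme0n1, and 457025 × 4 MiB = 1828100 MiB for the nested lvs_n_0 — which is then passed straight to bdev_lvol_create. A rough sketch of that lookup (the long workspace path to rpc.py is shortened to rpc.py, and jq is assumed to be installed):

  # size a new lvol to the full free space of an lvstore (what get_lvs_free_mb computes above)
  lvs_uuid=559c3431-9d38-4fd0-9e52-114852ab276b
  lvs_json=$(rpc.py bdev_lvol_get_lvstores)
  fc=$(jq ".[] | select(.uuid==\"$lvs_uuid\") .free_clusters" <<<"$lvs_json")
  cs=$(jq ".[] | select(.uuid==\"$lvs_uuid\") .cluster_size" <<<"$lvs_json")
  free_mb=$(( fc * cs / 1024 / 1024 ))   # 1787 * 1073741824 / 2^20 = 1829888
  rpc.py bdev_lvol_create -l lvs_0 lbd_0 "$free_mb"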
00:29:54.473 12:24:07 -- host/fio.sh@74 -- # sync 00:29:54.473 12:24:07 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:57.011 12:24:09 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:57.011 12:24:09 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:29:57.270 12:24:10 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:57.530 12:24:10 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:59.432 12:24:12 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:59.432 12:24:12 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:29:59.432 12:24:12 -- host/fio.sh@86 -- # nvmftestfini 00:29:59.432 12:24:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:59.432 12:24:12 -- nvmf/common.sh@116 -- # sync 00:29:59.432 12:24:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:59.432 12:24:12 -- nvmf/common.sh@119 -- # set +e 00:29:59.432 12:24:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:59.432 12:24:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:59.433 rmmod nvme_tcp 00:29:59.433 rmmod nvme_fabrics 00:29:59.433 rmmod nvme_keyring 00:29:59.433 12:24:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:59.433 12:24:12 -- nvmf/common.sh@123 -- # set -e 00:29:59.433 12:24:12 -- nvmf/common.sh@124 -- # return 0 00:29:59.433 12:24:12 -- nvmf/common.sh@477 -- # '[' -n 1653148 ']' 00:29:59.433 12:24:12 -- nvmf/common.sh@478 -- # killprocess 1653148 00:29:59.433 12:24:12 -- common/autotest_common.sh@926 -- # '[' -z 1653148 ']' 00:29:59.433 12:24:12 -- common/autotest_common.sh@930 -- # kill -0 1653148 00:29:59.433 12:24:12 -- common/autotest_common.sh@931 -- # uname 00:29:59.433 12:24:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:59.433 12:24:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1653148 00:29:59.433 12:24:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:59.433 12:24:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:59.433 12:24:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1653148' 00:29:59.433 killing process with pid 1653148 00:29:59.433 12:24:12 -- common/autotest_common.sh@945 -- # kill 1653148 00:29:59.433 12:24:12 -- common/autotest_common.sh@950 -- # wait 1653148 00:29:59.693 12:24:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:59.693 12:24:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:59.693 12:24:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:59.693 12:24:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:59.693 12:24:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:59.693 12:24:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.693 12:24:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:59.693 12:24:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.603 12:24:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:01.866 00:30:01.866 real 0m32.557s 00:30:01.866 user 2m40.191s 00:30:01.866 sys 0m9.676s 00:30:01.866 12:24:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:01.866 12:24:14 -- common/autotest_common.sh@10 -- # 
set +x 00:30:01.866 ************************************ 00:30:01.866 END TEST nvmf_fio_host 00:30:01.866 ************************************ 00:30:01.866 12:24:14 -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:01.866 12:24:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:01.866 12:24:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:01.866 12:24:14 -- common/autotest_common.sh@10 -- # set +x 00:30:01.866 ************************************ 00:30:01.866 START TEST nvmf_failover 00:30:01.866 ************************************ 00:30:01.866 12:24:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:01.866 * Looking for test storage... 00:30:01.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:01.866 12:24:14 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:01.866 12:24:14 -- nvmf/common.sh@7 -- # uname -s 00:30:01.866 12:24:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:01.866 12:24:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:01.866 12:24:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:01.866 12:24:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:01.866 12:24:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:01.866 12:24:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:01.866 12:24:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:01.866 12:24:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:01.866 12:24:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:01.866 12:24:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:01.866 12:24:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:01.866 12:24:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:01.866 12:24:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:01.866 12:24:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:01.866 12:24:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:01.866 12:24:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:01.866 12:24:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:01.866 12:24:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:01.866 12:24:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:01.866 12:24:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.866 12:24:14 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.866 12:24:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.866 12:24:14 -- paths/export.sh@5 -- # export PATH 00:30:01.866 12:24:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.866 12:24:14 -- nvmf/common.sh@46 -- # : 0 00:30:01.866 12:24:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:01.866 12:24:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:01.866 12:24:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:01.866 12:24:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:01.866 12:24:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:01.866 12:24:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:01.866 12:24:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:01.866 12:24:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:01.866 12:24:14 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:01.866 12:24:14 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:01.866 12:24:14 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:01.866 12:24:14 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:01.866 12:24:14 -- host/failover.sh@18 -- # nvmftestinit 00:30:01.866 12:24:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:01.866 12:24:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:01.866 12:24:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:01.866 12:24:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:01.866 12:24:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:01.866 12:24:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.866 12:24:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:01.866 12:24:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.866 12:24:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:01.866 12:24:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 
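nvmftestinit repeats the interface discovery and namespace setup shown earlier in this log before the failover test proper begins. That test publishes one subsystem on three TCP ports and drives I/O through bdevperf over a private RPC socket while listeners are removed, so the initiator is pushed onto the remaining paths. A condensed sketch of the flow that follows, with the full workspace paths to rpc.py, bdevperf and bdevperf.py shortened for readability:

  # target side: one malloc-backed subsystem listening on three ports
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done
  # host side: bdevperf on its own RPC socket, attached through the first two paths
  bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  # dropping the active listener forces the initiator over to the 4421 path
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420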
00:30:01.866 12:24:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:01.866 12:24:14 -- common/autotest_common.sh@10 -- # set +x 00:30:10.090 12:24:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:10.090 12:24:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:10.090 12:24:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:10.090 12:24:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:10.090 12:24:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:10.090 12:24:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:10.090 12:24:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:10.090 12:24:21 -- nvmf/common.sh@294 -- # net_devs=() 00:30:10.090 12:24:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:10.090 12:24:21 -- nvmf/common.sh@295 -- # e810=() 00:30:10.090 12:24:21 -- nvmf/common.sh@295 -- # local -ga e810 00:30:10.090 12:24:21 -- nvmf/common.sh@296 -- # x722=() 00:30:10.090 12:24:21 -- nvmf/common.sh@296 -- # local -ga x722 00:30:10.090 12:24:21 -- nvmf/common.sh@297 -- # mlx=() 00:30:10.090 12:24:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:10.090 12:24:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:10.090 12:24:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:10.090 12:24:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:10.090 12:24:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:10.090 12:24:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:10.090 12:24:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:10.090 12:24:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:10.090 12:24:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:10.090 12:24:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:10.090 12:24:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:10.090 12:24:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:10.090 12:24:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:10.090 12:24:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:10.090 12:24:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:10.090 12:24:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:10.090 12:24:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:10.090 12:24:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:10.090 12:24:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:10.090 12:24:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:10.090 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:10.090 12:24:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:10.090 12:24:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:10.090 12:24:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.090 12:24:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.090 12:24:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:10.090 12:24:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:10.090 12:24:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:10.090 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:10.090 12:24:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:10.090 12:24:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:10.091 12:24:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:30:10.091 12:24:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.091 12:24:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:10.091 12:24:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:10.091 12:24:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:10.091 12:24:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:10.091 12:24:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:10.091 12:24:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.091 12:24:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:10.091 12:24:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.091 12:24:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:10.091 Found net devices under 0000:31:00.0: cvl_0_0 00:30:10.091 12:24:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.091 12:24:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:10.091 12:24:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.091 12:24:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:10.091 12:24:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.091 12:24:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:10.091 Found net devices under 0000:31:00.1: cvl_0_1 00:30:10.091 12:24:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.091 12:24:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:10.091 12:24:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:10.091 12:24:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:10.091 12:24:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:10.091 12:24:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:10.091 12:24:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:10.091 12:24:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:10.091 12:24:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:10.091 12:24:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:10.091 12:24:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:10.091 12:24:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:10.091 12:24:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:10.091 12:24:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:10.091 12:24:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:10.091 12:24:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:10.091 12:24:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:10.091 12:24:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:10.091 12:24:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:10.091 12:24:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:10.091 12:24:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:10.091 12:24:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:10.091 12:24:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:10.091 12:24:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:10.091 12:24:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:10.091 12:24:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:10.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:30:10.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:30:10.091 00:30:10.091 --- 10.0.0.2 ping statistics --- 00:30:10.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.091 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:30:10.091 12:24:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:10.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:10.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:30:10.091 00:30:10.091 --- 10.0.0.1 ping statistics --- 00:30:10.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.091 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:30:10.091 12:24:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:10.091 12:24:21 -- nvmf/common.sh@410 -- # return 0 00:30:10.091 12:24:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:10.091 12:24:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:10.091 12:24:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:10.091 12:24:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:10.091 12:24:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:10.091 12:24:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:10.091 12:24:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:10.091 12:24:21 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:10.091 12:24:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:10.091 12:24:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:10.091 12:24:21 -- common/autotest_common.sh@10 -- # set +x 00:30:10.091 12:24:21 -- nvmf/common.sh@469 -- # nvmfpid=1663023 00:30:10.091 12:24:21 -- nvmf/common.sh@470 -- # waitforlisten 1663023 00:30:10.091 12:24:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:10.091 12:24:22 -- common/autotest_common.sh@819 -- # '[' -z 1663023 ']' 00:30:10.091 12:24:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:10.091 12:24:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:10.091 12:24:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:10.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:10.091 12:24:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:10.091 12:24:22 -- common/autotest_common.sh@10 -- # set +x 00:30:10.091 [2024-06-11 12:24:22.048727] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:10.091 [2024-06-11 12:24:22.048783] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:10.091 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.091 [2024-06-11 12:24:22.137570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:10.091 [2024-06-11 12:24:22.183174] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:10.091 [2024-06-11 12:24:22.183332] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:10.091 [2024-06-11 12:24:22.183344] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:10.091 [2024-06-11 12:24:22.183353] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:10.091 [2024-06-11 12:24:22.183486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:10.091 [2024-06-11 12:24:22.183648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.091 [2024-06-11 12:24:22.183649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:10.091 12:24:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:10.091 12:24:22 -- common/autotest_common.sh@852 -- # return 0 00:30:10.091 12:24:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:10.091 12:24:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:10.091 12:24:22 -- common/autotest_common.sh@10 -- # set +x 00:30:10.091 12:24:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:10.091 12:24:22 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:10.091 [2024-06-11 12:24:22.987814] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:10.091 12:24:23 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:10.349 Malloc0 00:30:10.349 12:24:23 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:10.349 12:24:23 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:10.607 12:24:23 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:10.865 [2024-06-11 12:24:23.655177] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:10.865 12:24:23 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:10.865 [2024-06-11 12:24:23.811579] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:10.865 12:24:23 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:11.123 [2024-06-11 12:24:23.976109] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:11.123 12:24:24 -- host/failover.sh@31 -- # bdevperf_pid=1663487 00:30:11.123 12:24:24 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:11.123 12:24:24 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:11.123 12:24:24 -- host/failover.sh@34 -- # waitforlisten 1663487 /var/tmp/bdevperf.sock 00:30:11.123 12:24:24 -- common/autotest_common.sh@819 -- # '[' -z 1663487 ']' 00:30:11.123 12:24:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:11.123 12:24:24 -- common/autotest_common.sh@824 -- # local 
max_retries=100 00:30:11.123 12:24:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:11.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:11.123 12:24:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:11.123 12:24:24 -- common/autotest_common.sh@10 -- # set +x 00:30:12.056 12:24:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:12.056 12:24:24 -- common/autotest_common.sh@852 -- # return 0 00:30:12.056 12:24:24 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:12.315 NVMe0n1 00:30:12.315 12:24:25 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:12.603 00:30:12.603 12:24:25 -- host/failover.sh@39 -- # run_test_pid=1663696 00:30:12.603 12:24:25 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:12.603 12:24:25 -- host/failover.sh@41 -- # sleep 1 00:30:13.538 12:24:26 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:13.797 [2024-06-11 12:24:26.585814] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b204f0 is same with the state(5) to be set 00:30:13.797 [2024-06-11 12:24:26.585856] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b204f0 is same with the state(5) to be set 00:30:13.797 [2024-06-11 12:24:26.585862] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b204f0 is same with the state(5) to be set 00:30:13.797 [2024-06-11 12:24:26.585867] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b204f0 is same with the state(5) to be set 00:30:13.797 [2024-06-11 12:24:26.585872] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b204f0 is same with the state(5) to be set 00:30:13.797 [2024-06-11 12:24:26.585876] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b204f0 is same with the state(5) to be set 00:30:13.797 [2024-06-11 12:24:26.585881] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b204f0 is same with the state(5) to be set 00:30:13.797 [2024-06-11 12:24:26.585885] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b204f0 is same with the state(5) to be set 00:30:13.797 [2024-06-11 12:24:26.585890] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b204f0 is same with the state(5) to be set 00:30:13.797 [2024-06-11 12:24:26.585894] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b204f0 is same with the state(5) to be set 00:30:13.797 [2024-06-11 12:24:26.585898] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b204f0 is same with the state(5) to be set 00:30:13.797 [2024-06-11 12:24:26.585903] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b204f0 is same with the state(5) to be set 
00:30:13.798 12:24:26 -- host/failover.sh@45 -- # sleep 3
00:30:17.079 12:24:29 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:17.079 00
00:30:17.079 12:24:29 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:30:17.080 [2024-06-11 12:24:29.998564] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b21a40 is same with the state(5) to be set
[identical *ERROR* line repeated for tqpair=0x1b21a40 through timestamp 12:24:29.998831; repetitions collapsed]
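Not exercised anywhere in this run, but if one wanted to confirm which trids the NVMe0 controller still holds after the 4420 and 4421 listeners are torn down, the bdevperf app's own RPC socket can be queried; this assumes the build's rpc.py exposes bdev_nvme_get_controllers, which is not invoked in the captured trace.

  # hypothetical check, not part of this job: list attached controllers and their current paths
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers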
00:30:17.080 12:24:30 -- host/failover.sh@50 -- # sleep 3
00:30:20.361 12:24:33 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:20.361 [2024-06-11 12:24:33.173188] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:20.361 12:24:33 -- host/failover.sh@55 -- # sleep 1
00:30:21.293 12:24:34 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:30:21.551 [2024-06-11 12:24:34.345390] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd7cd0 is same with the state(5) to be set
[identical *ERROR* line repeated for tqpair=0x1cd7cd0 through timestamp 12:24:34.345630; repetitions collapsed]
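For orientation, the listener toggling that drives the failovers above condenses to the sketch below. Ports, NQN, RPC socket and sleeps are taken from the trace; only the ordering matters here, and the recv-state errors logged around each removal are the target dropping the qpairs on the retired port while I/O moves to the remaining path.

  # force I/O off 4420 onto the 4421 path
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  # give the initiator a third path on 4422, then retire 4421
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 3
  # bring 4420 back and drop 4422, pushing I/O back to the original port
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422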
00:30:21.552 12:24:34 -- host/failover.sh@59 -- # wait 1663696
00:30:28.124 0
00:30:28.124 12:24:40 -- host/failover.sh@61 -- # killprocess 1663487
00:30:28.124 12:24:40 -- common/autotest_common.sh@926 -- # '[' -z 1663487 ']'
00:30:28.124 12:24:40 -- common/autotest_common.sh@930 -- # kill -0 1663487
00:30:28.124 12:24:40 -- common/autotest_common.sh@931 -- # uname
00:30:28.124 12:24:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:30:28.124 12:24:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1663487
00:30:28.124 12:24:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:30:28.124 12:24:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:30:28.124 12:24:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1663487'
killing process with pid 1663487
00:30:28.124 12:24:40 -- common/autotest_common.sh@945 -- # kill 1663487
00:30:28.124 12:24:40 -- common/autotest_common.sh@950 -- # wait 1663487
00:30:28.124 12:24:40 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:28.124 [2024-06-11 12:24:24.035612] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:30:28.124 [2024-06-11 12:24:24.035667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663487 ]
00:30:28.124 EAL: No free 2048 kB hugepages reported on node 1
00:30:28.124 [2024-06-11 12:24:24.094372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:28.124 [2024-06-11 12:24:24.123305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:28.124 Running I/O for 15 seconds...
00:30:28.124 [2024-06-11 12:24:26.586484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:37136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.124 [2024-06-11 12:24:26.586516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.124 [2024-06-11 12:24:26.586534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.124 [2024-06-11 12:24:26.586543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.124 [2024-06-11 12:24:26.586553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:37192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.124 [2024-06-11 12:24:26.586561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.124 [2024-06-11 12:24:26.586570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:37208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.124 [2024-06-11 12:24:26.586577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.124 [2024-06-11 12:24:26.586586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.124 [2024-06-11 12:24:26.586593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.124 [2024-06-11 12:24:26.586602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:37248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.124 [2024-06-11 12:24:26.586609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.124 [2024-06-11 12:24:26.586619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:37280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.124 [2024-06-11 12:24:26.586626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.124 [2024-06-11 12:24:26.586635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.124 [2024-06-11 12:24:26.586643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.124 [2024-06-11 12:24:26.586652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.124 [2024-06-11 12:24:26.586659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.124 [2024-06-11 12:24:26.586669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.124 [2024-06-11 12:24:26.586676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.124 [2024-06-11 12:24:26.586685] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:37704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.124 [2024-06-11 12:24:26.586693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.124 [2024-06-11 12:24:26.586707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:37712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.124 [2024-06-11 12:24:26.586714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.124 [2024-06-11 12:24:26.586725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:37720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.124 [2024-06-11 12:24:26.586732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.124 [2024-06-11 12:24:26.586742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.124 [2024-06-11 12:24:26.586748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.124 [2024-06-11 12:24:26.586758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:37776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.124 [2024-06-11 12:24:26.586765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.124 [2024-06-11 12:24:26.586775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.124 [2024-06-11 12:24:26.586782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.124 [2024-06-11 12:24:26.586791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.124 [2024-06-11 12:24:26.586797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.124 [2024-06-11 12:24:26.586807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:37808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.124 [2024-06-11 12:24:26.586814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.124 [2024-06-11 12:24:26.586824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:37816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.124 [2024-06-11 12:24:26.586831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.124 [2024-06-11 12:24:26.586840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:37824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.124 [2024-06-11 12:24:26.586847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.124 [2024-06-11 12:24:26.586856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.124 [2024-06-11 12:24:26.586863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.124 [2024-06-11 12:24:26.586872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:37848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.124 [2024-06-11 12:24:26.586879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.124 [2024-06-11 12:24:26.586888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:37296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.124 [2024-06-11 12:24:26.586895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.124 [2024-06-11 12:24:26.586904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.124 [2024-06-11 12:24:26.586914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.124 [2024-06-11 12:24:26.586924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:37312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.124 [2024-06-11 12:24:26.586931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.124 [2024-06-11 12:24:26.586940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:37328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.124 [2024-06-11 12:24:26.586947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.586956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:37344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.586963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.586973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:37392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.586980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.586989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:37400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.586996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:37408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.587011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:50 nsid:1 lba:37880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.587035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:37904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.587052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:37912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.587069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:37920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.587086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.587102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:37952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.587120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:37960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.587138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:37968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.125 [2024-06-11 12:24:26.587155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.125 [2024-06-11 12:24:26.587171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:37984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.125 [2024-06-11 12:24:26.587188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:37992 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.125 [2024-06-11 12:24:26.587203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.125 [2024-06-11 12:24:26.587219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:37416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.587236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.587252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.587268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:37464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.587284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:37480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.587301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:37488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.587317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:37520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.587334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:37528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.587350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:28.125 [2024-06-11 12:24:26.587366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:38016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.587382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:38024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.587399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.125 [2024-06-11 12:24:26.587415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.587432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:38048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.587449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.587465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.125 [2024-06-11 12:24:26.587481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.587498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:38080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.587513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.125 [2024-06-11 12:24:26.587530] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.125 [2024-06-11 12:24:26.587548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.125 [2024-06-11 12:24:26.587563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.587580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:38120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.587597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.125 [2024-06-11 12:24:26.587606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:38128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.125 [2024-06-11 12:24:26.587613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.587622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.126 [2024-06-11 12:24:26.587630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.587639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:38144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.126 [2024-06-11 12:24:26.587646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.587655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.126 [2024-06-11 12:24:26.587662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.587671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:38160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.126 [2024-06-11 12:24:26.587678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.587687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:38168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.126 [2024-06-11 12:24:26.587694] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.587703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.126 [2024-06-11 12:24:26.587710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.587719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.126 [2024-06-11 12:24:26.587727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.587736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:37552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.126 [2024-06-11 12:24:26.587743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.587753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:37584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.126 [2024-06-11 12:24:26.587760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.587770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:37592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.126 [2024-06-11 12:24:26.587777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.587786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:37600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.126 [2024-06-11 12:24:26.587792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.587801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:37608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.126 [2024-06-11 12:24:26.587808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.587818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:37624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.126 [2024-06-11 12:24:26.587825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.587834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.126 [2024-06-11 12:24:26.587841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.587850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.126 [2024-06-11 12:24:26.587857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.587866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.126 [2024-06-11 12:24:26.587873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.587882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.126 [2024-06-11 12:24:26.587889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.587898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.126 [2024-06-11 12:24:26.587905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.587914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:38216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.126 [2024-06-11 12:24:26.587921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.587930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.126 [2024-06-11 12:24:26.587938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.587947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:38232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.126 [2024-06-11 12:24:26.587955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.587965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.126 [2024-06-11 12:24:26.587972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.587982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.126 [2024-06-11 12:24:26.587988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.587997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.126 [2024-06-11 12:24:26.588005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.588014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.126 [2024-06-11 12:24:26.588025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:28.126 [2024-06-11 12:24:26.588034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:37744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.126 [2024-06-11 12:24:26.588041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.588050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:37752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.126 [2024-06-11 12:24:26.588057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.588067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:37760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.126 [2024-06-11 12:24:26.588074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.588083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:37800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.126 [2024-06-11 12:24:26.588090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.588099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:38240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.126 [2024-06-11 12:24:26.588105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.588115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:38248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.126 [2024-06-11 12:24:26.588123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.588132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.126 [2024-06-11 12:24:26.588139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.588148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:38264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.126 [2024-06-11 12:24:26.588155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.588169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.126 [2024-06-11 12:24:26.588177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.588186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.126 [2024-06-11 12:24:26.588193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.588202] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.126 [2024-06-11 12:24:26.588210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.588219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:38296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.126 [2024-06-11 12:24:26.588227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.588236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.126 [2024-06-11 12:24:26.588243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.588252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.126 [2024-06-11 12:24:26.588260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.126 [2024-06-11 12:24:26.588269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.126 [2024-06-11 12:24:26.588276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.127 [2024-06-11 12:24:26.588291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.127 [2024-06-11 12:24:26.588308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:38344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.127 [2024-06-11 12:24:26.588324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.127 [2024-06-11 12:24:26.588339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.127 [2024-06-11 12:24:26.588357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588366] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:38368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.127 [2024-06-11 12:24:26.588374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:38376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.127 [2024-06-11 12:24:26.588391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:38384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.127 [2024-06-11 12:24:26.588408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.127 [2024-06-11 12:24:26.588424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:38400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.127 [2024-06-11 12:24:26.588439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.127 [2024-06-11 12:24:26.588456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.127 [2024-06-11 12:24:26.588472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:38424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.127 [2024-06-11 12:24:26.588487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.127 [2024-06-11 12:24:26.588504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:37840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.127 [2024-06-11 12:24:26.588519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37856 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.127 [2024-06-11 12:24:26.588535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.127 [2024-06-11 12:24:26.588551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.127 [2024-06-11 12:24:26.588567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.127 [2024-06-11 12:24:26.588585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:37896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.127 [2024-06-11 12:24:26.588602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.127 [2024-06-11 12:24:26.588618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588627] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde38c0 is same with the state(5) to be set 00:30:28.127 [2024-06-11 12:24:26.588635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:28.127 [2024-06-11 12:24:26.588641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:28.127 [2024-06-11 12:24:26.588650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37944 len:8 PRP1 0x0 PRP2 0x0 00:30:28.127 [2024-06-11 12:24:26.588658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588694] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xde38c0 was disconnected and freed. reset controller. 
00:30:28.127 [2024-06-11 12:24:26.588707] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:28.127 [2024-06-11 12:24:26.588726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:28.127 [2024-06-11 12:24:26.588734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:28.127 [2024-06-11 12:24:26.588749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:28.127 [2024-06-11 12:24:26.588763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:28.127 [2024-06-11 12:24:26.588777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:26.588785] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:28.127 [2024-06-11 12:24:26.591122] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:28.127 [2024-06-11 12:24:26.591143] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc4c10 (9): Bad file descriptor 00:30:28.127 [2024-06-11 12:24:26.628899] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
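The block above shows the expected bdev_nvme failover path: the queued I/O on qpair 0xde38c0 is aborted with SQ DELETION, the qpair is disconnected and freed, and the controller fails over from 10.0.0.2:4420 to 10.0.0.2:4421 before the reset completes successfully. As a rough, assumed sketch only (the test's actual script and options are not shown in this log), a secondary path like the one failed over to here is typically registered up front through rpc.py by attaching the same controller name and NQN with an additional transport ID; the NQN and addresses below come from the log, everything else is illustrative:

  # assumed sketch: register a primary and a failover path for nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1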
00:30:28.127 [2024-06-11 12:24:29.999791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:67168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.127 [2024-06-11 12:24:29.999827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:29.999844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:66600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.127 [2024-06-11 12:24:29.999857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:29.999867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:66616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.127 [2024-06-11 12:24:29.999875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:29.999884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:66640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.127 [2024-06-11 12:24:29.999891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:29.999900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:66680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.127 [2024-06-11 12:24:29.999908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:29.999917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:66688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.127 [2024-06-11 12:24:29.999924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:29.999933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:66704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.127 [2024-06-11 12:24:29.999940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:29.999949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:66712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.127 [2024-06-11 12:24:29.999956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:29.999965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:66720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.127 [2024-06-11 12:24:29.999972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:29.999981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:66736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.127 [2024-06-11 12:24:29.999988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.127 [2024-06-11 12:24:29.999997] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:66744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.128 [2024-06-11 12:24:30.000005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.128 [2024-06-11 12:24:30.000014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.128 [2024-06-11 12:24:30.000028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.128 [2024-06-11 12:24:30.000037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.128 [2024-06-11 12:24:30.000043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.128 [2024-06-11 12:24:30.000052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.128 [2024-06-11 12:24:30.000060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.128 [2024-06-11 12:24:30.000071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:66792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.128 [2024-06-11 12:24:30.000079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.128 [2024-06-11 12:24:30.000088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:67176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.128 [2024-06-11 12:24:30.000094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.128 [2024-06-11 12:24:30.000104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:67184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.128 [2024-06-11 12:24:30.000112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.128 [2024-06-11 12:24:30.000121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.128 [2024-06-11 12:24:30.000128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.128 [2024-06-11 12:24:30.000137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:67200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.128 [2024-06-11 12:24:30.000144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.128 [2024-06-11 12:24:30.000153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:67208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.128 [2024-06-11 12:24:30.000160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.128 [2024-06-11 12:24:30.000169] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.128 [2024-06-11 12:24:30.000176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.128 [2024-06-11 12:24:30.000185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:67224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.128 [2024-06-11 12:24:30.000191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.128 [2024-06-11 12:24:30.000200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:67232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.128 [2024-06-11 12:24:30.000208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.128 [2024-06-11 12:24:30.000217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:67240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.128 [2024-06-11 12:24:30.000224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.128 [2024-06-11 12:24:30.000233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:67248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.128 [2024-06-11 12:24:30.000240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.128 [2024-06-11 12:24:30.000249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:67256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.128 [2024-06-11 12:24:30.000256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.128 [2024-06-11 12:24:30.000265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:67264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.128 [2024-06-11 12:24:30.000273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.128 [2024-06-11 12:24:30.000283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.128 [2024-06-11 12:24:30.000289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.128 [2024-06-11 12:24:30.000299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:67280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.128 [2024-06-11 12:24:30.000306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.128 [2024-06-11 12:24:30.000315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:67288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.128 [2024-06-11 12:24:30.000322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.128 [2024-06-11 12:24:30.000331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 
lba:67296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.128 [2024-06-11 12:24:30.000338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.128 [2024-06-11 12:24:30.000347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:67304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.128 [2024-06-11 12:24:30.000355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.128 [2024-06-11 12:24:30.000364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:67312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.128 [2024-06-11 12:24:30.000372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.128 [2024-06-11 12:24:30.000381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:67320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.128 [2024-06-11 12:24:30.000388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.128 [2024-06-11 12:24:30.000398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.128 [2024-06-11 12:24:30.000406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:67336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.129 [2024-06-11 12:24:30.000422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:67344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.129 [2024-06-11 12:24:30.000437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:67352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.129 [2024-06-11 12:24:30.000454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:67360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 12:24:30.000470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:67368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 12:24:30.000488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:67376 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:30:28.129 [2024-06-11 12:24:30.000505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:67384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.129 [2024-06-11 12:24:30.000521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:67392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 12:24:30.000536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:67400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.129 [2024-06-11 12:24:30.000553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:67408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.129 [2024-06-11 12:24:30.000569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:67416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 12:24:30.000584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:67424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.129 [2024-06-11 12:24:30.000601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 12:24:30.000617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:67440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 12:24:30.000633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:66808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 12:24:30.000649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:66824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 
12:24:30.000665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:66832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 12:24:30.000681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 12:24:30.000700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:66864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 12:24:30.000716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 12:24:30.000732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 12:24:30.000748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 12:24:30.000764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:67456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.129 [2024-06-11 12:24:30.000780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:67464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 12:24:30.000796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:67472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.129 [2024-06-11 12:24:30.000812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 12:24:30.000829] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:67488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.129 [2024-06-11 12:24:30.000845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:67496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 12:24:30.000861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:66928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 12:24:30.000878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:66936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 12:24:30.000896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:66944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 12:24:30.000912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:66952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 12:24:30.000929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:66960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 12:24:30.000945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:66976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 12:24:30.000961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 12:24:30.000977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.000987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:67000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 12:24:30.000994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.001003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:67504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.129 [2024-06-11 12:24:30.001009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.001024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:67512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.129 [2024-06-11 12:24:30.001032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.001041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:67520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 12:24:30.001047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.129 [2024-06-11 12:24:30.001057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:67528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.129 [2024-06-11 12:24:30.001064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:67536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.130 [2024-06-11 12:24:30.001080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:67544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.130 [2024-06-11 12:24:30.001097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.130 [2024-06-11 12:24:30.001114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:67560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.130 [2024-06-11 12:24:30.001131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:67568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.130 [2024-06-11 12:24:30.001147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:67576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.130 [2024-06-11 12:24:30.001162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:67584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.130 [2024-06-11 12:24:30.001179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:67024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.130 [2024-06-11 12:24:30.001195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.130 [2024-06-11 12:24:30.001211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:67064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.130 [2024-06-11 12:24:30.001227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.130 [2024-06-11 12:24:30.001243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:67080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.130 [2024-06-11 12:24:30.001259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.130 [2024-06-11 12:24:30.001276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:67120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.130 [2024-06-11 12:24:30.001292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:67136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.130 [2024-06-11 12:24:30.001312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:67592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.130 [2024-06-11 12:24:30.001330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 
[2024-06-11 12:24:30.001340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:67600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.130 [2024-06-11 12:24:30.001347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:67608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.130 [2024-06-11 12:24:30.001363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:67616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.130 [2024-06-11 12:24:30.001379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.130 [2024-06-11 12:24:30.001394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:67632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.130 [2024-06-11 12:24:30.001410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:67640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.130 [2024-06-11 12:24:30.001427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:67648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.130 [2024-06-11 12:24:30.001443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:67656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.130 [2024-06-11 12:24:30.001459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.130 [2024-06-11 12:24:30.001475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:67672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.130 [2024-06-11 12:24:30.001491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001501] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:67680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.130 [2024-06-11 12:24:30.001508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:67160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.130 [2024-06-11 12:24:30.001526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:67688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.130 [2024-06-11 12:24:30.001542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:67696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.130 [2024-06-11 12:24:30.001558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:67704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.130 [2024-06-11 12:24:30.001575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:67712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.130 [2024-06-11 12:24:30.001591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:67720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.130 [2024-06-11 12:24:30.001608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:67728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.130 [2024-06-11 12:24:30.001624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:67736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.130 [2024-06-11 12:24:30.001642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:67744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.130 [2024-06-11 12:24:30.001658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:125 nsid:1 lba:67752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.130 [2024-06-11 12:24:30.001674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.130 [2024-06-11 12:24:30.001691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:67768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.130 [2024-06-11 12:24:30.001707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.130 [2024-06-11 12:24:30.001717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:67776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.130 [2024-06-11 12:24:30.001723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:30.001734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:67784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.131 [2024-06-11 12:24:30.001742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:30.001751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:67792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.131 [2024-06-11 12:24:30.001757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:30.001766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:67800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.131 [2024-06-11 12:24:30.001773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:30.001782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:67808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.131 [2024-06-11 12:24:30.001790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:30.001799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:67816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.131 [2024-06-11 12:24:30.001806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:30.002289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:67824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.131 [2024-06-11 12:24:30.002296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:30.002305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:67832 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.131 [2024-06-11 12:24:30.002312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:30.002321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:67840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.131 [2024-06-11 12:24:30.002328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:30.002337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:67848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.131 [2024-06-11 12:24:30.002344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:30.002353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:67856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.131 [2024-06-11 12:24:30.002360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:30.002370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:67864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.131 [2024-06-11 12:24:30.002378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:30.002388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:67872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.131 [2024-06-11 12:24:30.002396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:30.002419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:28.131 [2024-06-11 12:24:30.002431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:28.131 [2024-06-11 12:24:30.002438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67880 len:8 PRP1 0x0 PRP2 0x0 00:30:28.131 [2024-06-11 12:24:30.002446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:30.002486] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdd1130 was disconnected and freed. reset controller. 
00:30:28.131 [2024-06-11 12:24:30.002497] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:30:28.131 [2024-06-11 12:24:30.002518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:28.131 [2024-06-11 12:24:30.002527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.131 [2024-06-11 12:24:30.002537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:28.131 [2024-06-11 12:24:30.002544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.131 [2024-06-11 12:24:30.002552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:28.131 [2024-06-11 12:24:30.002560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.131 [2024-06-11 12:24:30.002568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:28.131 [2024-06-11 12:24:30.002576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.131 [2024-06-11 12:24:30.002583] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:28.131 [2024-06-11 12:24:30.004793] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:28.131 [2024-06-11 12:24:30.004814] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc4c10 (9): Bad file descriptor
00:30:28.131 [2024-06-11 12:24:30.081355] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
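The short sequence above is the part of this dump the failover test actually cares about: the qpair to 10.0.0.2:4421 is torn down, bdev_nvme fails the trid over to 10.0.0.2:4422, the controller is reset, and I/O resumes. When reading a capture like this by hand it can help to filter out everything except those transitions. A minimal grep sketch follows, assuming the console output has been saved to a file named failover.log (the file name is an assumption, not something the test writes):

  # Hedged sketch: pull only the failover/reset milestones out of a saved copy of this log.
  # "failover.log" is an assumed file name; the patterns are function names taken from the notices above.
  grep -E 'bdev_nvme_failover_trid|nvme_ctrlr_disconnect|_bdev_nvme_reset_ctrlr_complete' failover.log
  # Expected matches (abridged): "Start failover from 10.0.0.2:4421 to 10.0.0.2:4422",
  # "[nqn.2016-06.io.spdk:cnode1] resetting controller", "Resetting controller successful."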
00:30:28.131 [2024-06-11 12:24:34.346185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.131 [2024-06-11 12:24:34.346218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:34.346235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:120672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.131 [2024-06-11 12:24:34.346243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:34.346253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:120024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.131 [2024-06-11 12:24:34.346261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:34.346270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:120048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.131 [2024-06-11 12:24:34.346278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:34.346287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.131 [2024-06-11 12:24:34.346294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:34.346307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.131 [2024-06-11 12:24:34.346315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:34.346324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.131 [2024-06-11 12:24:34.346332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:34.346341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.131 [2024-06-11 12:24:34.346348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:34.346357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.131 [2024-06-11 12:24:34.346364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:34.346373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.131 [2024-06-11 12:24:34.346380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 
12:24:34.346390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.131 [2024-06-11 12:24:34.346396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:34.346405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.131 [2024-06-11 12:24:34.346413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:34.346423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.131 [2024-06-11 12:24:34.346430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:34.346438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:120760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.131 [2024-06-11 12:24:34.346445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:34.346455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.131 [2024-06-11 12:24:34.346462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:34.346471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.131 [2024-06-11 12:24:34.346478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:34.346487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.131 [2024-06-11 12:24:34.346494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:34.346503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.131 [2024-06-11 12:24:34.346512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:34.346522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:120264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.131 [2024-06-11 12:24:34.346529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.131 [2024-06-11 12:24:34.346538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:120280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.131 [2024-06-11 12:24:34.346545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346554] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:120296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.132 [2024-06-11 12:24:34.346561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.132 [2024-06-11 12:24:34.346577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.132 [2024-06-11 12:24:34.346593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.132 [2024-06-11 12:24:34.346610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.132 [2024-06-11 12:24:34.346626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.132 [2024-06-11 12:24:34.346643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.132 [2024-06-11 12:24:34.346659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.132 [2024-06-11 12:24:34.346675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:120832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.132 [2024-06-11 12:24:34.346691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.132 [2024-06-11 12:24:34.346708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346718] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.132 [2024-06-11 12:24:34.346726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.132 [2024-06-11 12:24:34.346741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.132 [2024-06-11 12:24:34.346759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.132 [2024-06-11 12:24:34.346776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.132 [2024-06-11 12:24:34.346792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:120888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.132 [2024-06-11 12:24:34.346809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:120896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.132 [2024-06-11 12:24:34.346825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.132 [2024-06-11 12:24:34.346841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.132 [2024-06-11 12:24:34.346858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:120920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.132 [2024-06-11 12:24:34.346874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 
lba:120928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.132 [2024-06-11 12:24:34.346891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:120936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.132 [2024-06-11 12:24:34.346908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.132 [2024-06-11 12:24:34.346925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.132 [2024-06-11 12:24:34.346942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.132 [2024-06-11 12:24:34.346959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:120968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.132 [2024-06-11 12:24:34.346975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.346984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.132 [2024-06-11 12:24:34.346991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.347001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.132 [2024-06-11 12:24:34.347008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.347022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.132 [2024-06-11 12:24:34.347030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.347039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.132 [2024-06-11 12:24:34.347045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.347055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120400 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:28.132 [2024-06-11 12:24:34.347062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.347071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.132 [2024-06-11 12:24:34.347078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.347087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:120432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.132 [2024-06-11 12:24:34.347094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.347103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.132 [2024-06-11 12:24:34.347110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.347119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.132 [2024-06-11 12:24:34.347126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.347135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.132 [2024-06-11 12:24:34.347144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.347153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.132 [2024-06-11 12:24:34.347160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.347169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.132 [2024-06-11 12:24:34.347177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.347186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.132 [2024-06-11 12:24:34.347192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.132 [2024-06-11 12:24:34.347201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.132 [2024-06-11 12:24:34.347209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:121024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.133 
[2024-06-11 12:24:34.347225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:121032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.133 [2024-06-11 12:24:34.347241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.133 [2024-06-11 12:24:34.347258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.133 [2024-06-11 12:24:34.347274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:121056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.133 [2024-06-11 12:24:34.347290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.133 [2024-06-11 12:24:34.347308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.133 [2024-06-11 12:24:34.347324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:121080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.133 [2024-06-11 12:24:34.347340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.133 [2024-06-11 12:24:34.347358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.133 [2024-06-11 12:24:34.347375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.133 [2024-06-11 12:24:34.347391] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:120456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.133 [2024-06-11 12:24:34.347408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.133 [2024-06-11 12:24:34.347424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.133 [2024-06-11 12:24:34.347441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.133 [2024-06-11 12:24:34.347458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:120504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.133 [2024-06-11 12:24:34.347474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.133 [2024-06-11 12:24:34.347490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:120528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.133 [2024-06-11 12:24:34.347506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.133 [2024-06-11 12:24:34.347522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.133 [2024-06-11 12:24:34.347538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:121120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.133 [2024-06-11 12:24:34.347557] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.133 [2024-06-11 12:24:34.347573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.133 [2024-06-11 12:24:34.347589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.133 [2024-06-11 12:24:34.347606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:121152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.133 [2024-06-11 12:24:34.347622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:121160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.133 [2024-06-11 12:24:34.347638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.133 [2024-06-11 12:24:34.347654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:121176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.133 [2024-06-11 12:24:34.347671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:121184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.133 [2024-06-11 12:24:34.347686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:121192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.133 [2024-06-11 12:24:34.347703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:121200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.133 [2024-06-11 12:24:34.347719] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:121208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.133 [2024-06-11 12:24:34.347735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.133 [2024-06-11 12:24:34.347752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.133 [2024-06-11 12:24:34.347763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.134 [2024-06-11 12:24:34.347770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.347778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.134 [2024-06-11 12:24:34.347786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.347795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.134 [2024-06-11 12:24:34.347802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.347811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:120600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.134 [2024-06-11 12:24:34.347819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.347828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.134 [2024-06-11 12:24:34.347836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.347849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.134 [2024-06-11 12:24:34.347857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.347866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.134 [2024-06-11 12:24:34.347873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.347883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.134 [2024-06-11 12:24:34.347890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.347900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:121224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.134 [2024-06-11 12:24:34.347906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.347915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:121232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.134 [2024-06-11 12:24:34.347923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.347932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:121240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.134 [2024-06-11 12:24:34.347939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.347949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.134 [2024-06-11 12:24:34.347956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.347965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.134 [2024-06-11 12:24:34.347975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.347984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.134 [2024-06-11 12:24:34.347992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.348001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.134 [2024-06-11 12:24:34.348008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.348020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.134 [2024-06-11 12:24:34.348027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.348037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.134 [2024-06-11 12:24:34.348044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.348053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.134 [2024-06-11 12:24:34.348060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.348069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.134 [2024-06-11 12:24:34.348076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.348085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.134 [2024-06-11 12:24:34.348093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.348102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.134 [2024-06-11 12:24:34.348109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.348118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:121264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.134 [2024-06-11 12:24:34.348125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.348134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.134 [2024-06-11 12:24:34.348141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.348150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:121280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.134 [2024-06-11 12:24:34.348158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.348167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:121288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.134 [2024-06-11 12:24:34.348173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.348184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.134 [2024-06-11 12:24:34.348192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.348201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:121304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.134 [2024-06-11 12:24:34.348208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.348217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.134 [2024-06-11 12:24:34.348224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 
[2024-06-11 12:24:34.348233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:121320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.134 [2024-06-11 12:24:34.348240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.348249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.134 [2024-06-11 12:24:34.348256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.348265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:121336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.134 [2024-06-11 12:24:34.348272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.348281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.134 [2024-06-11 12:24:34.348288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.348297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.134 [2024-06-11 12:24:34.348306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.348315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.134 [2024-06-11 12:24:34.348325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.348349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:28.134 [2024-06-11 12:24:34.348356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:28.134 [2024-06-11 12:24:34.348363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120776 len:8 PRP1 0x0 PRP2 0x0 00:30:28.134 [2024-06-11 12:24:34.348372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.134 [2024-06-11 12:24:34.348410] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xde7940 was disconnected and freed. reset controller. 
00:30:28.134 [2024-06-11 12:24:34.348421] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:30:28.134 [2024-06-11 12:24:34.348442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:28.134 [2024-06-11 12:24:34.348453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.134 [2024-06-11 12:24:34.348462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:28.134 [2024-06-11 12:24:34.348471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.134 [2024-06-11 12:24:34.348480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:28.134 [2024-06-11 12:24:34.348488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.134 [2024-06-11 12:24:34.348496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:28.134 [2024-06-11 12:24:34.348503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.135 [2024-06-11 12:24:34.348511] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:28.135 [2024-06-11 12:24:34.350707] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:28.135 [2024-06-11 12:24:34.350730] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc4c10 (9): Bad file descriptor
00:30:28.135 [2024-06-11 12:24:34.392471] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
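This second disconnect/failover/reset cycle (10.0.0.2:4422 back to 10.0.0.2:4420) is only possible because the NVMe0 controller carries all three target ports as alternate trids. A compressed sketch of that path setup, built from the same rpc.py calls that appear later in this trace (socket, address, ports and NQN copied from the log; paths shortened, and the wrapper shell is illustrative rather than the failover.sh source):

  # Hedged sketch of the multipath setup behind the failovers logged above.
  # The rpc.py subcommands, address, ports and NQN are copied from this trace; the rest is illustrative.
  RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # Detaching the path that is currently in use forces the same kind of
  # "Start failover from ... to ..." transition seen in the notices above.
  $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1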
00:30:28.135
00:30:28.135 Latency(us)
00:30:28.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:28.135 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:28.135 Verification LBA range: start 0x0 length 0x4000
00:30:28.135 NVMe0n1 : 15.01 19746.45 77.13 557.47 0.00 6288.09 522.24 12997.97
00:30:28.135 ===================================================================================================================
00:30:28.135 Total : 19746.45 77.13 557.47 0.00 6288.09 522.24 12997.97
00:30:28.135 Received shutdown signal, test time was about 15.000000 seconds
00:30:28.135
00:30:28.135 Latency(us)
00:30:28.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:28.135 ===================================================================================================================
00:30:28.135 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:28.135 12:24:40 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:30:28.135 12:24:40 -- host/failover.sh@65 -- # count=3
00:30:28.135 12:24:40 -- host/failover.sh@67 -- # (( count != 3 ))
00:30:28.135 12:24:40 -- host/failover.sh@73 -- # bdevperf_pid=1666648
00:30:28.135 12:24:40 -- host/failover.sh@75 -- # waitforlisten 1666648 /var/tmp/bdevperf.sock
00:30:28.135 12:24:40 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:30:28.135 12:24:40 -- common/autotest_common.sh@819 -- # '[' -z 1666648 ']'
00:30:28.135 12:24:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:28.135 12:24:40 -- common/autotest_common.sh@824 -- # local max_retries=100
00:30:28.135 12:24:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:30:28.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
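The block above closes out the 15-second run: the script counts the 'Resetting controller successful' notices in the captured output, expects exactly three (one per failover), and then restarts bdevperf in RPC-server mode for the manual attach/detach phase that follows. A minimal sketch of that check, assuming the run output was captured to try.txt as the later cat of try.txt in this trace suggests (paths shortened; illustrative, not the verbatim failover.sh source):

  # Hedged sketch of the verification step shown above; file and binary paths are shortened.
  count=$(grep -c 'Resetting controller successful' try.txt)
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, saw $count" >&2
      exit 1
  fi
  # Restart bdevperf as an RPC server: -z waits for a perform_tests RPC, -r names the socket,
  # and the queue depth / IO size / workload flags mirror the command line in the trace.
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!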
00:30:28.135 12:24:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:28.135 12:24:40 -- common/autotest_common.sh@10 -- # set +x 00:30:28.701 12:24:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:28.701 12:24:41 -- common/autotest_common.sh@852 -- # return 0 00:30:28.701 12:24:41 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:28.701 [2024-06-11 12:24:41.708947] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:28.959 12:24:41 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:28.959 [2024-06-11 12:24:41.873349] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:28.959 12:24:41 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:29.221 NVMe0n1 00:30:29.221 12:24:42 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:29.791 00:30:29.791 12:24:42 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:29.791 00:30:30.048 12:24:42 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:30.048 12:24:42 -- host/failover.sh@82 -- # grep -q NVMe0 00:30:30.048 12:24:42 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:30.306 12:24:43 -- host/failover.sh@87 -- # sleep 3 00:30:33.591 12:24:46 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:33.591 12:24:46 -- host/failover.sh@88 -- # grep -q NVMe0 00:30:33.591 12:24:46 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:33.591 12:24:46 -- host/failover.sh@90 -- # run_test_pid=1667703 00:30:33.591 12:24:46 -- host/failover.sh@92 -- # wait 1667703 00:30:34.529 0 00:30:34.529 12:24:47 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:34.529 [2024-06-11 12:24:40.807642] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:30:34.529 [2024-06-11 12:24:40.807697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666648 ] 00:30:34.529 EAL: No free 2048 kB hugepages reported on node 1 00:30:34.529 [2024-06-11 12:24:40.867487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.529 [2024-06-11 12:24:40.895965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.529 [2024-06-11 12:24:43.131347] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:34.529 [2024-06-11 12:24:43.131391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:34.529 [2024-06-11 12:24:43.131402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:34.529 [2024-06-11 12:24:43.131410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:34.529 [2024-06-11 12:24:43.131418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:34.529 [2024-06-11 12:24:43.131425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:34.529 [2024-06-11 12:24:43.131432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:34.529 [2024-06-11 12:24:43.131440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:34.529 [2024-06-11 12:24:43.131447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:34.529 [2024-06-11 12:24:43.131454] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:34.529 [2024-06-11 12:24:43.131478] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:34.529 [2024-06-11 12:24:43.131492] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1052c10 (9): Bad file descriptor 00:30:34.529 [2024-06-11 12:24:43.264093] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:34.529 Running I/O for 1 seconds... 
00:30:34.529 00:30:34.529 Latency(us) 00:30:34.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:34.529 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:34.529 Verification LBA range: start 0x0 length 0x4000 00:30:34.529 NVMe0n1 : 1.00 19938.53 77.88 0.00 0.00 6390.99 1099.09 9065.81 00:30:34.529 =================================================================================================================== 00:30:34.529 Total : 19938.53 77.88 0.00 0.00 6390.99 1099.09 9065.81 00:30:34.529 12:24:47 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:34.529 12:24:47 -- host/failover.sh@95 -- # grep -q NVMe0 00:30:34.788 12:24:47 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:34.788 12:24:47 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:34.788 12:24:47 -- host/failover.sh@99 -- # grep -q NVMe0 00:30:35.047 12:24:47 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:35.047 12:24:48 -- host/failover.sh@101 -- # sleep 3 00:30:38.335 12:24:51 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:38.335 12:24:51 -- host/failover.sh@103 -- # grep -q NVMe0 00:30:38.335 12:24:51 -- host/failover.sh@108 -- # killprocess 1666648 00:30:38.335 12:24:51 -- common/autotest_common.sh@926 -- # '[' -z 1666648 ']' 00:30:38.335 12:24:51 -- common/autotest_common.sh@930 -- # kill -0 1666648 00:30:38.335 12:24:51 -- common/autotest_common.sh@931 -- # uname 00:30:38.335 12:24:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:38.335 12:24:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1666648 00:30:38.335 12:24:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:38.335 12:24:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:38.335 12:24:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1666648' 00:30:38.335 killing process with pid 1666648 00:30:38.335 12:24:51 -- common/autotest_common.sh@945 -- # kill 1666648 00:30:38.335 12:24:51 -- common/autotest_common.sh@950 -- # wait 1666648 00:30:38.594 12:24:51 -- host/failover.sh@110 -- # sync 00:30:38.594 12:24:51 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:38.594 12:24:51 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:38.594 12:24:51 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:38.594 12:24:51 -- host/failover.sh@116 -- # nvmftestfini 00:30:38.594 12:24:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:38.594 12:24:51 -- nvmf/common.sh@116 -- # sync 00:30:38.594 12:24:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:38.594 12:24:51 -- nvmf/common.sh@119 -- # set +e 00:30:38.594 12:24:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:38.594 12:24:51 -- nvmf/common.sh@121 -- # 
modprobe -v -r nvme-tcp 00:30:38.594 rmmod nvme_tcp 00:30:38.594 rmmod nvme_fabrics 00:30:38.853 rmmod nvme_keyring 00:30:38.853 12:24:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:38.853 12:24:51 -- nvmf/common.sh@123 -- # set -e 00:30:38.853 12:24:51 -- nvmf/common.sh@124 -- # return 0 00:30:38.853 12:24:51 -- nvmf/common.sh@477 -- # '[' -n 1663023 ']' 00:30:38.853 12:24:51 -- nvmf/common.sh@478 -- # killprocess 1663023 00:30:38.853 12:24:51 -- common/autotest_common.sh@926 -- # '[' -z 1663023 ']' 00:30:38.853 12:24:51 -- common/autotest_common.sh@930 -- # kill -0 1663023 00:30:38.853 12:24:51 -- common/autotest_common.sh@931 -- # uname 00:30:38.853 12:24:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:38.853 12:24:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1663023 00:30:38.853 12:24:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:38.853 12:24:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:38.853 12:24:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1663023' 00:30:38.853 killing process with pid 1663023 00:30:38.853 12:24:51 -- common/autotest_common.sh@945 -- # kill 1663023 00:30:38.853 12:24:51 -- common/autotest_common.sh@950 -- # wait 1663023 00:30:38.853 12:24:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:38.853 12:24:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:38.853 12:24:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:38.853 12:24:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:38.853 12:24:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:38.853 12:24:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.853 12:24:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:38.853 12:24:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:41.421 12:24:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:41.421 00:30:41.421 real 0m39.231s 00:30:41.421 user 2m1.101s 00:30:41.421 sys 0m7.919s 00:30:41.421 12:24:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:41.421 12:24:53 -- common/autotest_common.sh@10 -- # set +x 00:30:41.421 ************************************ 00:30:41.421 END TEST nvmf_failover 00:30:41.421 ************************************ 00:30:41.421 12:24:53 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:41.421 12:24:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:41.421 12:24:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:41.421 12:24:53 -- common/autotest_common.sh@10 -- # set +x 00:30:41.421 ************************************ 00:30:41.421 START TEST nvmf_discovery 00:30:41.421 ************************************ 00:30:41.421 12:24:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:41.421 * Looking for test storage... 
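The cleanup interleaved with the "killing process" messages above follows a fixed order: stop bdevperf and the nvmf target, unload the initiator-side kernel modules, then dismantle the test network. A rough sketch of that order as it appears in this run (the pids and interface names are specific to this job, and the namespace removal is an assumption standing in for nvmf/common.sh's _remove_spdk_ns):

  # Stop the SPDK processes started by the test; wait only works because they
  # are children of the test script (this is what killprocess relies on).
  kill 1666648 && wait 1666648     # bdevperf
  kill 1663023 && wait 1663023     # nvmf_tgt

  # Unload the host-side NVMe/TCP modules; the rmmod lines above are modprobe -v output.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # Tear down the namespaced test network and flush the leftover initiator address.
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1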
00:30:41.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:41.421 12:24:54 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:41.421 12:24:54 -- nvmf/common.sh@7 -- # uname -s 00:30:41.421 12:24:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:41.421 12:24:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:41.421 12:24:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:41.421 12:24:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:41.421 12:24:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:41.421 12:24:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:41.421 12:24:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:41.421 12:24:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:41.421 12:24:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:41.421 12:24:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:41.421 12:24:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:41.421 12:24:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:41.421 12:24:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:41.421 12:24:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:41.421 12:24:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:41.421 12:24:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:41.421 12:24:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:41.421 12:24:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:41.421 12:24:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:41.421 12:24:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.421 12:24:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.421 12:24:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.421 12:24:54 -- paths/export.sh@5 -- # export PATH 00:30:41.421 12:24:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.421 12:24:54 -- nvmf/common.sh@46 -- # : 0 00:30:41.421 12:24:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:41.421 12:24:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:41.421 12:24:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:41.421 12:24:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:41.421 12:24:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:41.421 12:24:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:41.421 12:24:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:41.421 12:24:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:41.421 12:24:54 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:41.421 12:24:54 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:30:41.421 12:24:54 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:41.421 12:24:54 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:41.421 12:24:54 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:41.421 12:24:54 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:41.421 12:24:54 -- host/discovery.sh@25 -- # nvmftestinit 00:30:41.421 12:24:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:41.421 12:24:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:41.421 12:24:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:41.422 12:24:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:41.422 12:24:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:41.422 12:24:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.422 12:24:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:41.422 12:24:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:41.422 12:24:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:41.422 12:24:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:41.422 12:24:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:41.422 12:24:54 -- common/autotest_common.sh@10 -- # set +x 00:30:48.052 12:25:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:48.052 12:25:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:48.052 12:25:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:48.052 12:25:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:48.052 12:25:00 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:48.052 12:25:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:48.052 12:25:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:48.052 12:25:00 -- nvmf/common.sh@294 -- # net_devs=() 00:30:48.052 12:25:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:48.052 12:25:00 -- nvmf/common.sh@295 -- # e810=() 00:30:48.052 12:25:00 -- nvmf/common.sh@295 -- # local -ga e810 00:30:48.052 12:25:00 -- nvmf/common.sh@296 -- # x722=() 00:30:48.052 12:25:00 -- nvmf/common.sh@296 -- # local -ga x722 00:30:48.052 12:25:00 -- nvmf/common.sh@297 -- # mlx=() 00:30:48.052 12:25:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:48.052 12:25:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:48.052 12:25:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:48.052 12:25:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:48.052 12:25:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:48.052 12:25:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:48.052 12:25:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:48.052 12:25:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:48.052 12:25:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:48.052 12:25:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:48.052 12:25:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:48.052 12:25:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:48.052 12:25:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:48.052 12:25:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:48.052 12:25:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:48.052 12:25:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:48.052 12:25:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:48.052 12:25:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:48.052 12:25:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:48.052 12:25:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:48.052 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:48.052 12:25:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:48.052 12:25:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:48.052 12:25:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.052 12:25:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.052 12:25:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:48.052 12:25:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:48.052 12:25:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:48.052 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:48.052 12:25:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:48.052 12:25:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:48.052 12:25:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.052 12:25:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.052 12:25:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:48.052 12:25:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:48.052 12:25:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:48.052 12:25:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:48.052 12:25:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:48.052 
12:25:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.052 12:25:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:48.052 12:25:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.052 12:25:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:48.052 Found net devices under 0000:31:00.0: cvl_0_0 00:30:48.052 12:25:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.052 12:25:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:48.052 12:25:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.052 12:25:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:48.052 12:25:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.052 12:25:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:48.052 Found net devices under 0000:31:00.1: cvl_0_1 00:30:48.052 12:25:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.052 12:25:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:48.052 12:25:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:48.052 12:25:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:48.052 12:25:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:48.052 12:25:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:48.052 12:25:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:48.052 12:25:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:48.052 12:25:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:48.052 12:25:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:48.052 12:25:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:48.052 12:25:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:48.052 12:25:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:48.052 12:25:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:48.052 12:25:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:48.052 12:25:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:48.052 12:25:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:48.052 12:25:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:48.052 12:25:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:48.052 12:25:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:48.312 12:25:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:48.312 12:25:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:48.312 12:25:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:48.312 12:25:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:48.312 12:25:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:48.312 12:25:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:48.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:48.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:30:48.312 00:30:48.312 --- 10.0.0.2 ping statistics --- 00:30:48.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.312 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:30:48.312 12:25:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:48.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:48.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:30:48.312 00:30:48.312 --- 10.0.0.1 ping statistics --- 00:30:48.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.312 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:30:48.312 12:25:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:48.312 12:25:01 -- nvmf/common.sh@410 -- # return 0 00:30:48.312 12:25:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:48.312 12:25:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:48.312 12:25:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:48.312 12:25:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:48.312 12:25:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:48.312 12:25:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:48.312 12:25:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:48.312 12:25:01 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:48.312 12:25:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:48.312 12:25:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:48.312 12:25:01 -- common/autotest_common.sh@10 -- # set +x 00:30:48.312 12:25:01 -- nvmf/common.sh@469 -- # nvmfpid=1673074 00:30:48.312 12:25:01 -- nvmf/common.sh@470 -- # waitforlisten 1673074 00:30:48.312 12:25:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:48.312 12:25:01 -- common/autotest_common.sh@819 -- # '[' -z 1673074 ']' 00:30:48.312 12:25:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.312 12:25:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:48.312 12:25:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:48.312 12:25:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:48.313 12:25:01 -- common/autotest_common.sh@10 -- # set +x 00:30:48.313 [2024-06-11 12:25:01.328555] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:48.313 [2024-06-11 12:25:01.328621] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:48.572 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.572 [2024-06-11 12:25:01.414436] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.572 [2024-06-11 12:25:01.458742] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:48.572 [2024-06-11 12:25:01.458887] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:48.572 [2024-06-11 12:25:01.458896] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:48.572 [2024-06-11 12:25:01.458905] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
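The ip/iptables commands echoed above are nvmf/common.sh building the loopback topology for a physical-NIC run: the first e810 port (cvl_0_0) moves into a private namespace and becomes the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A stripped-down sketch of that setup, with the interface names specific to this rig:

  ns=cvl_0_0_ns_spdk

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1

  # Isolate the target-side port in its own network namespace.
  ip netns add "$ns"
  ip link set cvl_0_0 netns "$ns"

  # Address both ends and bring the links up.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$ns" ip link set cvl_0_0 up
  ip netns exec "$ns" ip link set lo up

  # Allow NVMe/TCP traffic in and verify reachability in both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$ns" ping -c 1 10.0.0.1

The nvmf_tgt process itself is launched under "ip netns exec cvl_0_0_ns_spdk" (visible in the nvmfappstart line above), which is why it can bind 10.0.0.2 while rpc.py keeps talking to it over the Unix socket from the root namespace.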
00:30:48.572 [2024-06-11 12:25:01.458928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.142 12:25:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:49.142 12:25:02 -- common/autotest_common.sh@852 -- # return 0 00:30:49.142 12:25:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:49.142 12:25:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:49.142 12:25:02 -- common/autotest_common.sh@10 -- # set +x 00:30:49.142 12:25:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:49.142 12:25:02 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:49.142 12:25:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.142 12:25:02 -- common/autotest_common.sh@10 -- # set +x 00:30:49.142 [2024-06-11 12:25:02.147533] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.142 12:25:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.142 12:25:02 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:49.142 12:25:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.142 12:25:02 -- common/autotest_common.sh@10 -- # set +x 00:30:49.142 [2024-06-11 12:25:02.159736] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:49.142 12:25:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.142 12:25:02 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:49.142 12:25:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.142 12:25:02 -- common/autotest_common.sh@10 -- # set +x 00:30:49.142 null0 00:30:49.142 12:25:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.401 12:25:02 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:49.401 12:25:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.401 12:25:02 -- common/autotest_common.sh@10 -- # set +x 00:30:49.401 null1 00:30:49.401 12:25:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.401 12:25:02 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:49.401 12:25:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.401 12:25:02 -- common/autotest_common.sh@10 -- # set +x 00:30:49.401 12:25:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.401 12:25:02 -- host/discovery.sh@45 -- # hostpid=1673130 00:30:49.401 12:25:02 -- host/discovery.sh@46 -- # waitforlisten 1673130 /tmp/host.sock 00:30:49.401 12:25:02 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:49.401 12:25:02 -- common/autotest_common.sh@819 -- # '[' -z 1673130 ']' 00:30:49.401 12:25:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:30:49.401 12:25:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:49.401 12:25:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:49.401 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:49.401 12:25:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:49.401 12:25:02 -- common/autotest_common.sh@10 -- # set +x 00:30:49.401 [2024-06-11 12:25:02.253057] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
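From here the discovery test wires up two SPDK applications: the namespaced target gets a TCP transport, a discovery listener on port 8009 and two null bdevs to advertise, and a second nvmf_tgt is started purely as the host, controlled over /tmp/host.sock. A compressed sketch of that setup (rpc_cmd is the test suite's wrapper around scripts/rpc.py; paths are relative to the SPDK repo root):

  # Target side (inside the namespace, default /var/tmp/spdk.sock).
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  rpc_cmd bdev_null_create null0 1000 512
  rpc_cmd bdev_null_create null1 1000 512
  rpc_cmd bdev_wait_for_examine

  # Host side: another nvmf_tgt whose bdev_nvme layer will act as the discovery client.
  build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  hostpid=$!

The host process then enables bdev_nvme logging and issues bdev_nvme_start_discovery against 10.0.0.2:8009 with host NQN nqn.2021-12.io.spdk:test, which is what produces the discovery_attach_cb and discovery_poller notices further down.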
00:30:49.401 [2024-06-11 12:25:02.253137] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1673130 ] 00:30:49.401 EAL: No free 2048 kB hugepages reported on node 1 00:30:49.401 [2024-06-11 12:25:02.322779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.401 [2024-06-11 12:25:02.359332] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:49.401 [2024-06-11 12:25:02.359492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.341 12:25:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:50.341 12:25:03 -- common/autotest_common.sh@852 -- # return 0 00:30:50.341 12:25:03 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:50.341 12:25:03 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:50.341 12:25:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.341 12:25:03 -- common/autotest_common.sh@10 -- # set +x 00:30:50.341 12:25:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.341 12:25:03 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:50.341 12:25:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.341 12:25:03 -- common/autotest_common.sh@10 -- # set +x 00:30:50.341 12:25:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.341 12:25:03 -- host/discovery.sh@72 -- # notify_id=0 00:30:50.341 12:25:03 -- host/discovery.sh@78 -- # get_subsystem_names 00:30:50.341 12:25:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:50.341 12:25:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:50.341 12:25:03 -- host/discovery.sh@59 -- # sort 00:30:50.341 12:25:03 -- host/discovery.sh@59 -- # xargs 00:30:50.341 12:25:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.341 12:25:03 -- common/autotest_common.sh@10 -- # set +x 00:30:50.341 12:25:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.341 12:25:03 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:30:50.341 12:25:03 -- host/discovery.sh@79 -- # get_bdev_list 00:30:50.341 12:25:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:50.341 12:25:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:50.341 12:25:03 -- host/discovery.sh@55 -- # sort 00:30:50.341 12:25:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.341 12:25:03 -- host/discovery.sh@55 -- # xargs 00:30:50.341 12:25:03 -- common/autotest_common.sh@10 -- # set +x 00:30:50.341 12:25:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.341 12:25:03 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:30:50.341 12:25:03 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:50.341 12:25:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.341 12:25:03 -- common/autotest_common.sh@10 -- # set +x 00:30:50.341 12:25:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.341 12:25:03 -- host/discovery.sh@82 -- # get_subsystem_names 00:30:50.341 12:25:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:50.341 12:25:03 -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:30:50.341 12:25:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.341 12:25:03 -- host/discovery.sh@59 -- # sort 00:30:50.341 12:25:03 -- common/autotest_common.sh@10 -- # set +x 00:30:50.341 12:25:03 -- host/discovery.sh@59 -- # xargs 00:30:50.341 12:25:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.341 12:25:03 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:30:50.341 12:25:03 -- host/discovery.sh@83 -- # get_bdev_list 00:30:50.341 12:25:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:50.341 12:25:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:50.341 12:25:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.341 12:25:03 -- host/discovery.sh@55 -- # sort 00:30:50.341 12:25:03 -- common/autotest_common.sh@10 -- # set +x 00:30:50.341 12:25:03 -- host/discovery.sh@55 -- # xargs 00:30:50.341 12:25:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.341 12:25:03 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:50.341 12:25:03 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:50.341 12:25:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.341 12:25:03 -- common/autotest_common.sh@10 -- # set +x 00:30:50.341 12:25:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.341 12:25:03 -- host/discovery.sh@86 -- # get_subsystem_names 00:30:50.341 12:25:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:50.341 12:25:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:50.341 12:25:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.341 12:25:03 -- common/autotest_common.sh@10 -- # set +x 00:30:50.341 12:25:03 -- host/discovery.sh@59 -- # sort 00:30:50.341 12:25:03 -- host/discovery.sh@59 -- # xargs 00:30:50.341 12:25:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.341 12:25:03 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:30:50.341 12:25:03 -- host/discovery.sh@87 -- # get_bdev_list 00:30:50.341 12:25:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:50.341 12:25:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:50.341 12:25:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.341 12:25:03 -- host/discovery.sh@55 -- # sort 00:30:50.341 12:25:03 -- common/autotest_common.sh@10 -- # set +x 00:30:50.341 12:25:03 -- host/discovery.sh@55 -- # xargs 00:30:50.341 12:25:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.341 12:25:03 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:50.341 12:25:03 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:50.341 12:25:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.341 12:25:03 -- common/autotest_common.sh@10 -- # set +x 00:30:50.341 [2024-06-11 12:25:03.374887] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:50.601 12:25:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.601 12:25:03 -- host/discovery.sh@92 -- # get_subsystem_names 00:30:50.601 12:25:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:50.601 12:25:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:50.601 12:25:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.601 12:25:03 -- host/discovery.sh@59 -- # sort 00:30:50.601 12:25:03 -- common/autotest_common.sh@10 -- # set +x 00:30:50.601 12:25:03 
-- host/discovery.sh@59 -- # xargs 00:30:50.601 12:25:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.601 12:25:03 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:50.601 12:25:03 -- host/discovery.sh@93 -- # get_bdev_list 00:30:50.601 12:25:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:50.601 12:25:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:50.601 12:25:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.601 12:25:03 -- host/discovery.sh@55 -- # sort 00:30:50.601 12:25:03 -- common/autotest_common.sh@10 -- # set +x 00:30:50.601 12:25:03 -- host/discovery.sh@55 -- # xargs 00:30:50.601 12:25:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.601 12:25:03 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:30:50.601 12:25:03 -- host/discovery.sh@94 -- # get_notification_count 00:30:50.601 12:25:03 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:50.601 12:25:03 -- host/discovery.sh@74 -- # jq '. | length' 00:30:50.601 12:25:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.601 12:25:03 -- common/autotest_common.sh@10 -- # set +x 00:30:50.601 12:25:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.601 12:25:03 -- host/discovery.sh@74 -- # notification_count=0 00:30:50.601 12:25:03 -- host/discovery.sh@75 -- # notify_id=0 00:30:50.601 12:25:03 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:30:50.601 12:25:03 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:50.601 12:25:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.601 12:25:03 -- common/autotest_common.sh@10 -- # set +x 00:30:50.601 12:25:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.601 12:25:03 -- host/discovery.sh@100 -- # sleep 1 00:30:51.170 [2024-06-11 12:25:04.085965] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:51.170 [2024-06-11 12:25:04.085987] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:51.170 [2024-06-11 12:25:04.085999] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:51.170 [2024-06-11 12:25:04.173277] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:51.430 [2024-06-11 12:25:04.359051] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:51.430 [2024-06-11 12:25:04.359072] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:51.689 12:25:04 -- host/discovery.sh@101 -- # get_subsystem_names 00:30:51.689 12:25:04 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:51.689 12:25:04 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:51.689 12:25:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:51.689 12:25:04 -- host/discovery.sh@59 -- # sort 00:30:51.689 12:25:04 -- common/autotest_common.sh@10 -- # set +x 00:30:51.689 12:25:04 -- host/discovery.sh@59 -- # xargs 00:30:51.689 12:25:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:51.689 12:25:04 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:51.689 12:25:04 -- host/discovery.sh@102 -- # get_bdev_list 00:30:51.689 12:25:04 -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:51.689 12:25:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:51.689 12:25:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:51.689 12:25:04 -- common/autotest_common.sh@10 -- # set +x 00:30:51.689 12:25:04 -- host/discovery.sh@55 -- # sort 00:30:51.689 12:25:04 -- host/discovery.sh@55 -- # xargs 00:30:51.689 12:25:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:51.689 12:25:04 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:30:51.689 12:25:04 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:30:51.689 12:25:04 -- host/discovery.sh@63 -- # xargs 00:30:51.689 12:25:04 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:51.689 12:25:04 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:51.689 12:25:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:51.689 12:25:04 -- host/discovery.sh@63 -- # sort -n 00:30:51.689 12:25:04 -- common/autotest_common.sh@10 -- # set +x 00:30:51.689 12:25:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:51.689 12:25:04 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:30:51.689 12:25:04 -- host/discovery.sh@104 -- # get_notification_count 00:30:51.689 12:25:04 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:51.689 12:25:04 -- host/discovery.sh@74 -- # jq '. | length' 00:30:51.689 12:25:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:51.689 12:25:04 -- common/autotest_common.sh@10 -- # set +x 00:30:51.689 12:25:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:51.949 12:25:04 -- host/discovery.sh@74 -- # notification_count=1 00:30:51.949 12:25:04 -- host/discovery.sh@75 -- # notify_id=1 00:30:51.949 12:25:04 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:30:51.949 12:25:04 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:51.949 12:25:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:51.949 12:25:04 -- common/autotest_common.sh@10 -- # set +x 00:30:51.949 12:25:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:51.949 12:25:04 -- host/discovery.sh@109 -- # sleep 1 00:30:52.886 12:25:05 -- host/discovery.sh@110 -- # get_bdev_list 00:30:52.886 12:25:05 -- host/discovery.sh@55 -- # sort 00:30:52.886 12:25:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:52.886 12:25:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:52.886 12:25:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:52.886 12:25:05 -- common/autotest_common.sh@10 -- # set +x 00:30:52.886 12:25:05 -- host/discovery.sh@55 -- # xargs 00:30:52.886 12:25:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:52.886 12:25:05 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:52.886 12:25:05 -- host/discovery.sh@111 -- # get_notification_count 00:30:52.886 12:25:05 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:52.886 12:25:05 -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:52.886 12:25:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:52.886 12:25:05 -- common/autotest_common.sh@10 -- # set +x 00:30:52.886 12:25:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:52.886 12:25:05 -- host/discovery.sh@74 -- # notification_count=1 00:30:52.886 12:25:05 -- host/discovery.sh@75 -- # notify_id=2 00:30:52.886 12:25:05 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:30:52.886 12:25:05 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:52.886 12:25:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:52.886 12:25:05 -- common/autotest_common.sh@10 -- # set +x 00:30:52.886 [2024-06-11 12:25:05.853393] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:52.886 [2024-06-11 12:25:05.853587] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:52.886 [2024-06-11 12:25:05.853611] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:52.886 12:25:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:52.886 12:25:05 -- host/discovery.sh@117 -- # sleep 1 00:30:53.146 [2024-06-11 12:25:05.939868] bdev_nvme.c:6677:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:53.405 [2024-06-11 12:25:06.209161] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:53.405 [2024-06-11 12:25:06.209178] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:53.405 [2024-06-11 12:25:06.209184] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:53.975 12:25:06 -- host/discovery.sh@118 -- # get_subsystem_names 00:30:53.975 12:25:06 -- host/discovery.sh@59 -- # sort 00:30:53.975 12:25:06 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:53.975 12:25:06 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:53.975 12:25:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:53.975 12:25:06 -- common/autotest_common.sh@10 -- # set +x 00:30:53.975 12:25:06 -- host/discovery.sh@59 -- # xargs 00:30:53.975 12:25:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:53.975 12:25:06 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.975 12:25:06 -- host/discovery.sh@119 -- # get_bdev_list 00:30:53.975 12:25:06 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:53.975 12:25:06 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:53.975 12:25:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:53.975 12:25:06 -- common/autotest_common.sh@10 -- # set +x 00:30:53.975 12:25:06 -- host/discovery.sh@55 -- # sort 00:30:53.975 12:25:06 -- host/discovery.sh@55 -- # xargs 00:30:53.975 12:25:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:53.975 12:25:06 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:53.975 12:25:06 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:30:53.975 12:25:06 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:53.975 12:25:06 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:53.975 12:25:06 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:30:53.975 12:25:06 -- common/autotest_common.sh@10 -- # set +x 00:30:53.975 12:25:06 -- host/discovery.sh@63 -- # sort -n 00:30:53.975 12:25:06 -- host/discovery.sh@63 -- # xargs 00:30:53.975 12:25:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.235 12:25:07 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:54.235 12:25:07 -- host/discovery.sh@121 -- # get_notification_count 00:30:54.235 12:25:07 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:54.235 12:25:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.235 12:25:07 -- common/autotest_common.sh@10 -- # set +x 00:30:54.235 12:25:07 -- host/discovery.sh@74 -- # jq '. | length' 00:30:54.235 12:25:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.235 12:25:07 -- host/discovery.sh@74 -- # notification_count=0 00:30:54.235 12:25:07 -- host/discovery.sh@75 -- # notify_id=2 00:30:54.235 12:25:07 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:30:54.235 12:25:07 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:54.235 12:25:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.235 12:25:07 -- common/autotest_common.sh@10 -- # set +x 00:30:54.235 [2024-06-11 12:25:07.073002] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:54.235 [2024-06-11 12:25:07.073026] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:54.235 [2024-06-11 12:25:07.073073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:54.235 [2024-06-11 12:25:07.073089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.235 [2024-06-11 12:25:07.073097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:54.235 [2024-06-11 12:25:07.073105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.235 [2024-06-11 12:25:07.073112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:54.235 [2024-06-11 12:25:07.073119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.235 [2024-06-11 12:25:07.073127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:54.235 [2024-06-11 12:25:07.073134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:54.235 [2024-06-11 12:25:07.073141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x762fe0 is same with the state(5) to be set 00:30:54.235 12:25:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.235 12:25:07 -- host/discovery.sh@127 -- # sleep 1 00:30:54.235 [2024-06-11 12:25:07.083084] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x762fe0 (9): Bad file descriptor 00:30:54.235 [2024-06-11 12:25:07.093125] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:54.235 [2024-06-11 12:25:07.093465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.235 [2024-06-11 12:25:07.093803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.235 [2024-06-11 12:25:07.093814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x762fe0 with addr=10.0.0.2, port=4420 00:30:54.235 [2024-06-11 12:25:07.093822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x762fe0 is same with the state(5) to be set 00:30:54.235 [2024-06-11 12:25:07.093834] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x762fe0 (9): Bad file descriptor 00:30:54.235 [2024-06-11 12:25:07.093844] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:54.235 [2024-06-11 12:25:07.093851] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:54.235 [2024-06-11 12:25:07.093859] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:54.235 [2024-06-11 12:25:07.093870] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.235 [2024-06-11 12:25:07.103180] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:54.235 [2024-06-11 12:25:07.103489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.235 [2024-06-11 12:25:07.103798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.235 [2024-06-11 12:25:07.103808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x762fe0 with addr=10.0.0.2, port=4420 00:30:54.236 [2024-06-11 12:25:07.103816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x762fe0 is same with the state(5) to be set 00:30:54.236 [2024-06-11 12:25:07.103827] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x762fe0 (9): Bad file descriptor 00:30:54.236 [2024-06-11 12:25:07.103837] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:54.236 [2024-06-11 12:25:07.103843] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:54.236 [2024-06-11 12:25:07.103853] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:54.236 [2024-06-11 12:25:07.103864] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
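The get_subsystem_names, get_bdev_list, get_subsystem_paths and get_notification_count calls interleaved with the RPC output above are small helpers built from host-side RPCs plus jq; roughly, under the assumption that rpc_cmd forwards to scripts/rpc.py and that notify_id starts at 0:

  get_subsystem_names() {
          rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }

  get_bdev_list() {
          rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  get_subsystem_paths() {
          rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
                  jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }

  get_notification_count() {
          notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
          notify_id=$((notify_id + notification_count))
  }

The empty-string comparisons earlier in the test ([[ '' == '' ]]) are these helpers returning nothing before the discovery entry attaches; once it has, they return nvme0, "nvme0n1 nvme0n2" and the 4420/4421 ports seen above.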
00:30:54.236 [2024-06-11 12:25:07.113230] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:54.236 [2024-06-11 12:25:07.113572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.236 [2024-06-11 12:25:07.113764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.236 [2024-06-11 12:25:07.113775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x762fe0 with addr=10.0.0.2, port=4420 00:30:54.236 [2024-06-11 12:25:07.113783] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x762fe0 is same with the state(5) to be set 00:30:54.236 [2024-06-11 12:25:07.113794] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x762fe0 (9): Bad file descriptor 00:30:54.236 [2024-06-11 12:25:07.113804] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:54.236 [2024-06-11 12:25:07.113811] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:54.236 [2024-06-11 12:25:07.113819] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:54.236 [2024-06-11 12:25:07.113830] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.236 [2024-06-11 12:25:07.123281] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:54.236 [2024-06-11 12:25:07.123630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.236 [2024-06-11 12:25:07.123794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.236 [2024-06-11 12:25:07.123804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x762fe0 with addr=10.0.0.2, port=4420 00:30:54.236 [2024-06-11 12:25:07.123811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x762fe0 is same with the state(5) to be set 00:30:54.236 [2024-06-11 12:25:07.123823] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x762fe0 (9): Bad file descriptor 00:30:54.236 [2024-06-11 12:25:07.123833] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:54.236 [2024-06-11 12:25:07.123839] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:54.236 [2024-06-11 12:25:07.123846] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:54.236 [2024-06-11 12:25:07.123857] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:54.236 [2024-06-11 12:25:07.133334] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:54.236 [2024-06-11 12:25:07.133674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.236 [2024-06-11 12:25:07.134033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.236 [2024-06-11 12:25:07.134044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x762fe0 with addr=10.0.0.2, port=4420 00:30:54.236 [2024-06-11 12:25:07.134051] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x762fe0 is same with the state(5) to be set 00:30:54.236 [2024-06-11 12:25:07.134062] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x762fe0 (9): Bad file descriptor 00:30:54.236 [2024-06-11 12:25:07.134072] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:54.236 [2024-06-11 12:25:07.134078] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:54.236 [2024-06-11 12:25:07.134085] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:54.236 [2024-06-11 12:25:07.134099] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.236 [2024-06-11 12:25:07.143385] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:54.236 [2024-06-11 12:25:07.143733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.236 [2024-06-11 12:25:07.144076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.236 [2024-06-11 12:25:07.144087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x762fe0 with addr=10.0.0.2, port=4420 00:30:54.236 [2024-06-11 12:25:07.144094] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x762fe0 is same with the state(5) to be set 00:30:54.236 [2024-06-11 12:25:07.144105] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x762fe0 (9): Bad file descriptor 00:30:54.236 [2024-06-11 12:25:07.144115] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:54.236 [2024-06-11 12:25:07.144121] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:54.236 [2024-06-11 12:25:07.144128] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:54.236 [2024-06-11 12:25:07.144138] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
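The repeated connect() errno 111 failures above are the expected aftermath of removing the 4420 listener at host/discovery.sh line 126: bdev_nvme keeps trying to reconnect the now-dead 4420 path until the next discovery log page reports it gone ("4420 not found ... 4421 found again" just below), leaving only the 4421 path attached. A hedged sketch of the kind of check that confirms the switch, reusing the jq expression from get_subsystem_paths (an illustration, not the literal script):

  # After nvmf_subsystem_remove_listener ... -s 4420, only the 4421 path should remain.
  paths=$(rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 |
          jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
  [[ $paths == 4421 ]]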
00:30:54.236 [2024-06-11 12:25:07.153436] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:54.236 [2024-06-11 12:25:07.153751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.236 [2024-06-11 12:25:07.154086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.236 [2024-06-11 12:25:07.154097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x762fe0 with addr=10.0.0.2, port=4420 00:30:54.236 [2024-06-11 12:25:07.154104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x762fe0 is same with the state(5) to be set 00:30:54.236 [2024-06-11 12:25:07.154115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x762fe0 (9): Bad file descriptor 00:30:54.236 [2024-06-11 12:25:07.154125] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:54.236 [2024-06-11 12:25:07.154131] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:54.236 [2024-06-11 12:25:07.154138] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:54.236 [2024-06-11 12:25:07.154148] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:54.236 [2024-06-11 12:25:07.160292] bdev_nvme.c:6540:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:54.236 [2024-06-11 12:25:07.160310] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:55.174 12:25:08 -- host/discovery.sh@128 -- # get_subsystem_names 00:30:55.174 12:25:08 -- host/discovery.sh@59 -- # xargs 00:30:55.174 12:25:08 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:55.174 12:25:08 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:55.174 12:25:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:55.174 12:25:08 -- host/discovery.sh@59 -- # sort 00:30:55.174 12:25:08 -- common/autotest_common.sh@10 -- # set +x 00:30:55.174 12:25:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.174 12:25:08 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.174 12:25:08 -- host/discovery.sh@129 -- # get_bdev_list 00:30:55.174 12:25:08 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:55.174 12:25:08 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:55.174 12:25:08 -- host/discovery.sh@55 -- # sort 00:30:55.174 12:25:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:55.174 12:25:08 -- host/discovery.sh@55 -- # xargs 00:30:55.174 12:25:08 -- common/autotest_common.sh@10 -- # set +x 00:30:55.174 12:25:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.174 12:25:08 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:55.174 12:25:08 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:30:55.174 12:25:08 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:55.174 12:25:08 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:55.174 12:25:08 -- host/discovery.sh@63 -- # sort -n 00:30:55.174 12:25:08 -- host/discovery.sh@63 -- # xargs 00:30:55.174 12:25:08 -- common/autotest_common.sh@551 
-- # xtrace_disable 00:30:55.174 12:25:08 -- common/autotest_common.sh@10 -- # set +x 00:30:55.174 12:25:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.434 12:25:08 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:30:55.434 12:25:08 -- host/discovery.sh@131 -- # get_notification_count 00:30:55.434 12:25:08 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:55.434 12:25:08 -- host/discovery.sh@74 -- # jq '. | length' 00:30:55.434 12:25:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:55.434 12:25:08 -- common/autotest_common.sh@10 -- # set +x 00:30:55.434 12:25:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.434 12:25:08 -- host/discovery.sh@74 -- # notification_count=0 00:30:55.434 12:25:08 -- host/discovery.sh@75 -- # notify_id=2 00:30:55.434 12:25:08 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:30:55.434 12:25:08 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:55.434 12:25:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:55.434 12:25:08 -- common/autotest_common.sh@10 -- # set +x 00:30:55.434 12:25:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.434 12:25:08 -- host/discovery.sh@135 -- # sleep 1 00:30:56.372 12:25:09 -- host/discovery.sh@136 -- # get_subsystem_names 00:30:56.372 12:25:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:56.372 12:25:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.372 12:25:09 -- common/autotest_common.sh@10 -- # set +x 00:30:56.372 12:25:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:56.372 12:25:09 -- host/discovery.sh@59 -- # sort 00:30:56.372 12:25:09 -- host/discovery.sh@59 -- # xargs 00:30:56.372 12:25:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.372 12:25:09 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:30:56.372 12:25:09 -- host/discovery.sh@137 -- # get_bdev_list 00:30:56.372 12:25:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:56.372 12:25:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:56.372 12:25:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.372 12:25:09 -- common/autotest_common.sh@10 -- # set +x 00:30:56.372 12:25:09 -- host/discovery.sh@55 -- # sort 00:30:56.372 12:25:09 -- host/discovery.sh@55 -- # xargs 00:30:56.372 12:25:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.372 12:25:09 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:30:56.372 12:25:09 -- host/discovery.sh@138 -- # get_notification_count 00:30:56.372 12:25:09 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:56.372 12:25:09 -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:56.372 12:25:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.372 12:25:09 -- common/autotest_common.sh@10 -- # set +x 00:30:56.631 12:25:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.631 12:25:09 -- host/discovery.sh@74 -- # notification_count=2 00:30:56.631 12:25:09 -- host/discovery.sh@75 -- # notify_id=4 00:30:56.631 12:25:09 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:30:56.631 12:25:09 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:56.631 12:25:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.631 12:25:09 -- common/autotest_common.sh@10 -- # set +x 00:30:57.570 [2024-06-11 12:25:10.503980] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:57.570 [2024-06-11 12:25:10.504002] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:57.570 [2024-06-11 12:25:10.504015] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:57.570 [2024-06-11 12:25:10.592298] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:57.830 [2024-06-11 12:25:10.697217] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:57.830 [2024-06-11 12:25:10.697255] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:57.830 12:25:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:57.830 12:25:10 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:57.830 12:25:10 -- common/autotest_common.sh@640 -- # local es=0 00:30:57.830 12:25:10 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:57.830 12:25:10 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:30:57.830 12:25:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:57.830 12:25:10 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:30:57.830 12:25:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:57.830 12:25:10 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:57.830 12:25:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:57.830 12:25:10 -- common/autotest_common.sh@10 -- # set +x 00:30:57.830 request: 00:30:57.830 { 00:30:57.830 "name": "nvme", 00:30:57.830 "trtype": "tcp", 00:30:57.830 "traddr": "10.0.0.2", 00:30:57.830 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:57.830 "adrfam": "ipv4", 00:30:57.830 "trsvcid": "8009", 00:30:57.830 "wait_for_attach": true, 00:30:57.830 "method": "bdev_nvme_start_discovery", 00:30:57.830 "req_id": 1 00:30:57.830 } 00:30:57.830 Got JSON-RPC error response 00:30:57.830 response: 00:30:57.830 { 00:30:57.830 "code": -17, 00:30:57.830 "message": "File exists" 00:30:57.830 } 00:30:57.830 12:25:10 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:30:57.830 12:25:10 -- common/autotest_common.sh@643 -- # es=1 00:30:57.830 12:25:10 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:30:57.830 12:25:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:30:57.830 12:25:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:30:57.830 12:25:10 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:30:57.830 12:25:10 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:57.830 12:25:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:57.830 12:25:10 -- common/autotest_common.sh@10 -- # set +x 00:30:57.830 12:25:10 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:57.830 12:25:10 -- host/discovery.sh@67 -- # sort 00:30:57.830 12:25:10 -- host/discovery.sh@67 -- # xargs 00:30:57.830 12:25:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:57.830 12:25:10 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:30:57.830 12:25:10 -- host/discovery.sh@147 -- # get_bdev_list 00:30:57.830 12:25:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:57.830 12:25:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:57.830 12:25:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:57.830 12:25:10 -- host/discovery.sh@55 -- # sort 00:30:57.830 12:25:10 -- common/autotest_common.sh@10 -- # set +x 00:30:57.830 12:25:10 -- host/discovery.sh@55 -- # xargs 00:30:57.830 12:25:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:57.830 12:25:10 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:57.830 12:25:10 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:57.830 12:25:10 -- common/autotest_common.sh@640 -- # local es=0 00:30:57.830 12:25:10 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:57.830 12:25:10 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:30:57.830 12:25:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:57.830 12:25:10 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:30:57.830 12:25:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:57.830 12:25:10 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:57.830 12:25:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:57.830 12:25:10 -- common/autotest_common.sh@10 -- # set +x 00:30:57.830 request: 00:30:57.830 { 00:30:57.830 "name": "nvme_second", 00:30:57.830 "trtype": "tcp", 00:30:57.830 "traddr": "10.0.0.2", 00:30:57.830 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:57.830 "adrfam": "ipv4", 00:30:57.830 "trsvcid": "8009", 00:30:57.830 "wait_for_attach": true, 00:30:57.830 "method": "bdev_nvme_start_discovery", 00:30:57.830 "req_id": 1 00:30:57.830 } 00:30:57.830 Got JSON-RPC error response 00:30:57.830 response: 00:30:57.830 { 00:30:57.830 "code": -17, 00:30:57.830 "message": "File exists" 00:30:57.830 } 00:30:57.830 12:25:10 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:30:57.830 12:25:10 -- common/autotest_common.sh@643 -- # es=1 00:30:57.830 12:25:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:30:57.830 12:25:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:30:57.830 12:25:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:30:57.830 
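The -17 / "File exists" responses above are the outcomes the NOT wrapper asserts: bdev_nvme_start_discovery rejects a second start against a discovery address/port that is already being serviced, whether the existing name ("nvme") or a fresh one ("nvme_second") is supplied. A minimal reproduction sketch, reusing the exact socket and flags from this run:

  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
  # the second call returns JSON-RPC error -17 ("File exists") because 10.0.0.2:8009 is already under discovery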
12:25:10 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:30:57.830 12:25:10 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:57.830 12:25:10 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:57.830 12:25:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:57.830 12:25:10 -- host/discovery.sh@67 -- # sort 00:30:57.830 12:25:10 -- common/autotest_common.sh@10 -- # set +x 00:30:57.830 12:25:10 -- host/discovery.sh@67 -- # xargs 00:30:57.830 12:25:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:58.090 12:25:10 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:30:58.090 12:25:10 -- host/discovery.sh@153 -- # get_bdev_list 00:30:58.090 12:25:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:58.090 12:25:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:58.090 12:25:10 -- common/autotest_common.sh@10 -- # set +x 00:30:58.090 12:25:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:58.090 12:25:10 -- host/discovery.sh@55 -- # sort 00:30:58.090 12:25:10 -- host/discovery.sh@55 -- # xargs 00:30:58.090 12:25:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:58.090 12:25:10 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:58.090 12:25:10 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:58.090 12:25:10 -- common/autotest_common.sh@640 -- # local es=0 00:30:58.090 12:25:10 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:58.090 12:25:10 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:30:58.090 12:25:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:58.090 12:25:10 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:30:58.090 12:25:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:58.090 12:25:10 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:58.090 12:25:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:58.090 12:25:10 -- common/autotest_common.sh@10 -- # set +x 00:30:59.026 [2024-06-11 12:25:11.964741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.027 [2024-06-11 12:25:11.965070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.027 [2024-06-11 12:25:11.965084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75f560 with addr=10.0.0.2, port=8010 00:30:59.027 [2024-06-11 12:25:11.965096] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:59.027 [2024-06-11 12:25:11.965102] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:59.027 [2024-06-11 12:25:11.965110] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:59.970 [2024-06-11 12:25:12.967026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.970 [2024-06-11 12:25:12.967330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.970 [2024-06-11 12:25:12.967344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error 
of tqpair=0x75f560 with addr=10.0.0.2, port=8010 00:30:59.970 [2024-06-11 12:25:12.967355] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:59.970 [2024-06-11 12:25:12.967362] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:59.970 [2024-06-11 12:25:12.967369] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:01.349 [2024-06-11 12:25:13.969074] bdev_nvme.c:6796:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:01.349 request: 00:31:01.349 { 00:31:01.349 "name": "nvme_second", 00:31:01.349 "trtype": "tcp", 00:31:01.349 "traddr": "10.0.0.2", 00:31:01.349 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:01.349 "adrfam": "ipv4", 00:31:01.349 "trsvcid": "8010", 00:31:01.349 "attach_timeout_ms": 3000, 00:31:01.349 "method": "bdev_nvme_start_discovery", 00:31:01.349 "req_id": 1 00:31:01.349 } 00:31:01.349 Got JSON-RPC error response 00:31:01.349 response: 00:31:01.349 { 00:31:01.349 "code": -110, 00:31:01.349 "message": "Connection timed out" 00:31:01.349 } 00:31:01.349 12:25:13 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:31:01.349 12:25:13 -- common/autotest_common.sh@643 -- # es=1 00:31:01.349 12:25:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:01.349 12:25:13 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:31:01.349 12:25:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:01.349 12:25:13 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:31:01.349 12:25:13 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:01.349 12:25:13 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:01.349 12:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:01.349 12:25:13 -- host/discovery.sh@67 -- # sort 00:31:01.349 12:25:13 -- common/autotest_common.sh@10 -- # set +x 00:31:01.349 12:25:13 -- host/discovery.sh@67 -- # xargs 00:31:01.349 12:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:01.349 12:25:14 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:31:01.349 12:25:14 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:31:01.349 12:25:14 -- host/discovery.sh@162 -- # kill 1673130 00:31:01.349 12:25:14 -- host/discovery.sh@163 -- # nvmftestfini 00:31:01.349 12:25:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:01.349 12:25:14 -- nvmf/common.sh@116 -- # sync 00:31:01.349 12:25:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:01.349 12:25:14 -- nvmf/common.sh@119 -- # set +e 00:31:01.349 12:25:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:01.349 12:25:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:01.349 rmmod nvme_tcp 00:31:01.349 rmmod nvme_fabrics 00:31:01.349 rmmod nvme_keyring 00:31:01.349 12:25:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:01.349 12:25:14 -- nvmf/common.sh@123 -- # set -e 00:31:01.349 12:25:14 -- nvmf/common.sh@124 -- # return 0 00:31:01.349 12:25:14 -- nvmf/common.sh@477 -- # '[' -n 1673074 ']' 00:31:01.349 12:25:14 -- nvmf/common.sh@478 -- # killprocess 1673074 00:31:01.349 12:25:14 -- common/autotest_common.sh@926 -- # '[' -z 1673074 ']' 00:31:01.349 12:25:14 -- common/autotest_common.sh@930 -- # kill -0 1673074 00:31:01.349 12:25:14 -- common/autotest_common.sh@931 -- # uname 00:31:01.349 12:25:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:01.349 12:25:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1673074 00:31:01.349 
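The -110 ("Connection timed out") case above is the bounded-wait variant: with -T 3000 (attach_timeout_ms) and nothing listening on 10.0.0.2:8010, the discovery poller gives up after the timeout instead of retrying indefinitely, and the RPC returns the error the test asserts. A minimal sketch with the flags from this run:

  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
  # no listener on port 8010 -> JSON-RPC error -110 ("Connection timed out") after roughly 3 seconds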
12:25:14 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:01.349 12:25:14 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:01.349 12:25:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1673074' 00:31:01.349 killing process with pid 1673074 00:31:01.349 12:25:14 -- common/autotest_common.sh@945 -- # kill 1673074 00:31:01.349 12:25:14 -- common/autotest_common.sh@950 -- # wait 1673074 00:31:01.349 12:25:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:01.349 12:25:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:01.349 12:25:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:01.349 12:25:14 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:01.349 12:25:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:01.349 12:25:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.349 12:25:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:01.349 12:25:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:03.889 12:25:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:03.889 00:31:03.889 real 0m22.360s 00:31:03.889 user 0m28.397s 00:31:03.889 sys 0m6.624s 00:31:03.889 12:25:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:03.889 12:25:16 -- common/autotest_common.sh@10 -- # set +x 00:31:03.889 ************************************ 00:31:03.889 END TEST nvmf_discovery 00:31:03.889 ************************************ 00:31:03.889 12:25:16 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:03.889 12:25:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:03.889 12:25:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:03.889 12:25:16 -- common/autotest_common.sh@10 -- # set +x 00:31:03.889 ************************************ 00:31:03.889 START TEST nvmf_discovery_remove_ifc 00:31:03.889 ************************************ 00:31:03.889 12:25:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:03.889 * Looking for test storage... 
00:31:03.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:03.889 12:25:16 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:03.889 12:25:16 -- nvmf/common.sh@7 -- # uname -s 00:31:03.889 12:25:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:03.889 12:25:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:03.889 12:25:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:03.889 12:25:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:03.889 12:25:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:03.889 12:25:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:03.889 12:25:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:03.889 12:25:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:03.889 12:25:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:03.889 12:25:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:03.889 12:25:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:03.889 12:25:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:03.889 12:25:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:03.889 12:25:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:03.889 12:25:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:03.889 12:25:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:03.889 12:25:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:03.889 12:25:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:03.889 12:25:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:03.890 12:25:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.890 12:25:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.890 12:25:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.890 12:25:16 -- paths/export.sh@5 -- # export PATH 00:31:03.890 12:25:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.890 12:25:16 -- nvmf/common.sh@46 -- # : 0 00:31:03.890 12:25:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:03.890 12:25:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:03.890 12:25:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:03.890 12:25:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:03.890 12:25:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:03.890 12:25:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:03.890 12:25:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:03.890 12:25:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:03.890 12:25:16 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:03.890 12:25:16 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:03.890 12:25:16 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:03.890 12:25:16 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:03.890 12:25:16 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:03.890 12:25:16 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:03.890 12:25:16 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:03.890 12:25:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:03.890 12:25:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:03.890 12:25:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:03.890 12:25:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:03.890 12:25:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:03.890 12:25:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.890 12:25:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:03.890 12:25:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:03.890 12:25:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:03.890 12:25:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:03.890 12:25:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:03.890 12:25:16 -- common/autotest_common.sh@10 -- # set +x 00:31:10.485 12:25:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:10.485 12:25:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:10.485 12:25:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:10.485 12:25:23 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:10.485 12:25:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:10.485 12:25:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:10.485 12:25:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:10.485 12:25:23 -- nvmf/common.sh@294 -- # net_devs=() 00:31:10.485 12:25:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:10.485 12:25:23 -- nvmf/common.sh@295 -- # e810=() 00:31:10.485 12:25:23 -- nvmf/common.sh@295 -- # local -ga e810 00:31:10.485 12:25:23 -- nvmf/common.sh@296 -- # x722=() 00:31:10.485 12:25:23 -- nvmf/common.sh@296 -- # local -ga x722 00:31:10.485 12:25:23 -- nvmf/common.sh@297 -- # mlx=() 00:31:10.485 12:25:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:10.485 12:25:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:10.485 12:25:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:10.485 12:25:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:10.485 12:25:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:10.485 12:25:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:10.485 12:25:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:10.485 12:25:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:10.485 12:25:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:10.485 12:25:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:10.485 12:25:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:10.485 12:25:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:10.485 12:25:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:10.485 12:25:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:10.485 12:25:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:10.485 12:25:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:10.485 12:25:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:10.485 12:25:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:10.485 12:25:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:10.485 12:25:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:10.485 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:10.485 12:25:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:10.485 12:25:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:10.485 12:25:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.485 12:25:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.485 12:25:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:10.485 12:25:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:10.485 12:25:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:10.485 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:10.485 12:25:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:10.485 12:25:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:10.485 12:25:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.485 12:25:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.485 12:25:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:10.485 12:25:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:10.485 12:25:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:10.485 12:25:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:10.485 12:25:23 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:10.485 12:25:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.485 12:25:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:10.485 12:25:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.485 12:25:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:10.485 Found net devices under 0000:31:00.0: cvl_0_0 00:31:10.485 12:25:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.485 12:25:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:10.485 12:25:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.485 12:25:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:10.485 12:25:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.485 12:25:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:10.485 Found net devices under 0000:31:00.1: cvl_0_1 00:31:10.485 12:25:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.485 12:25:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:10.485 12:25:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:10.485 12:25:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:10.485 12:25:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:10.485 12:25:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:10.485 12:25:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:10.485 12:25:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:10.485 12:25:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:10.485 12:25:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:10.485 12:25:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:10.485 12:25:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:10.485 12:25:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:10.485 12:25:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:10.485 12:25:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:10.485 12:25:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:10.485 12:25:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:10.485 12:25:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:10.485 12:25:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:10.485 12:25:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:10.485 12:25:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:10.485 12:25:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:10.485 12:25:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:10.485 12:25:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:10.485 12:25:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:10.485 12:25:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:10.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:10.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:31:10.485 00:31:10.485 --- 10.0.0.2 ping statistics --- 00:31:10.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.485 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:31:10.485 12:25:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:10.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:10.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:31:10.485 00:31:10.485 --- 10.0.0.1 ping statistics --- 00:31:10.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.485 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:31:10.485 12:25:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:10.485 12:25:23 -- nvmf/common.sh@410 -- # return 0 00:31:10.485 12:25:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:10.485 12:25:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:10.485 12:25:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:10.485 12:25:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:10.485 12:25:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:10.485 12:25:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:10.485 12:25:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:10.485 12:25:23 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:10.485 12:25:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:10.485 12:25:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:10.485 12:25:23 -- common/autotest_common.sh@10 -- # set +x 00:31:10.485 12:25:23 -- nvmf/common.sh@469 -- # nvmfpid=1679747 00:31:10.485 12:25:23 -- nvmf/common.sh@470 -- # waitforlisten 1679747 00:31:10.485 12:25:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:10.485 12:25:23 -- common/autotest_common.sh@819 -- # '[' -z 1679747 ']' 00:31:10.485 12:25:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:10.485 12:25:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:10.485 12:25:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:10.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:10.485 12:25:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:10.485 12:25:23 -- common/autotest_common.sh@10 -- # set +x 00:31:10.485 [2024-06-11 12:25:23.447244] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:10.485 [2024-06-11 12:25:23.447313] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:10.485 EAL: No free 2048 kB hugepages reported on node 1 00:31:10.745 [2024-06-11 12:25:23.536498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.745 [2024-06-11 12:25:23.580454] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:10.745 [2024-06-11 12:25:23.580589] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:10.745 [2024-06-11 12:25:23.580597] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:10.745 [2024-06-11 12:25:23.580605] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:10.745 [2024-06-11 12:25:23.580629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:11.315 12:25:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:11.315 12:25:24 -- common/autotest_common.sh@852 -- # return 0 00:31:11.315 12:25:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:11.315 12:25:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:11.315 12:25:24 -- common/autotest_common.sh@10 -- # set +x 00:31:11.315 12:25:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:11.315 12:25:24 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:11.315 12:25:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.315 12:25:24 -- common/autotest_common.sh@10 -- # set +x 00:31:11.315 [2024-06-11 12:25:24.280155] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:11.316 [2024-06-11 12:25:24.288359] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:11.316 null0 00:31:11.316 [2024-06-11 12:25:24.320341] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:11.316 12:25:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.316 12:25:24 -- host/discovery_remove_ifc.sh@59 -- # hostpid=1680017 00:31:11.316 12:25:24 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1680017 /tmp/host.sock 00:31:11.316 12:25:24 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:11.316 12:25:24 -- common/autotest_common.sh@819 -- # '[' -z 1680017 ']' 00:31:11.316 12:25:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:31:11.316 12:25:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:11.316 12:25:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:11.316 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:11.316 12:25:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:11.316 12:25:24 -- common/autotest_common.sh@10 -- # set +x 00:31:11.575 [2024-06-11 12:25:24.401011] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:31:11.575 [2024-06-11 12:25:24.401082] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1680017 ] 00:31:11.575 EAL: No free 2048 kB hugepages reported on node 1 00:31:11.575 [2024-06-11 12:25:24.466323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:11.575 [2024-06-11 12:25:24.503587] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:11.575 [2024-06-11 12:25:24.503745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.144 12:25:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:12.144 12:25:25 -- common/autotest_common.sh@852 -- # return 0 00:31:12.144 12:25:25 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:12.144 12:25:25 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:12.144 12:25:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.144 12:25:25 -- common/autotest_common.sh@10 -- # set +x 00:31:12.144 12:25:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.144 12:25:25 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:12.144 12:25:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.144 12:25:25 -- common/autotest_common.sh@10 -- # set +x 00:31:12.404 12:25:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.404 12:25:25 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:12.404 12:25:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.404 12:25:25 -- common/autotest_common.sh@10 -- # set +x 00:31:13.346 [2024-06-11 12:25:26.228552] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:13.346 [2024-06-11 12:25:26.228571] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:13.346 [2024-06-11 12:25:26.228584] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:13.346 [2024-06-11 12:25:26.357996] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:13.607 [2024-06-11 12:25:26.539713] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:13.607 [2024-06-11 12:25:26.539756] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:13.607 [2024-06-11 12:25:26.539776] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:13.607 [2024-06-11 12:25:26.539790] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:13.607 [2024-06-11 12:25:26.539810] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:13.607 12:25:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:13.607 12:25:26 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:13.607 12:25:26 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:31:13.607 12:25:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:13.607 12:25:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:13.607 12:25:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:13.607 12:25:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:13.607 12:25:26 -- common/autotest_common.sh@10 -- # set +x 00:31:13.607 [2024-06-11 12:25:26.547868] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xd1f870 was disconnected and freed. delete nvme_qpair. 00:31:13.607 12:25:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:13.607 12:25:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:13.607 12:25:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:13.607 12:25:26 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:13.607 12:25:26 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:13.866 12:25:26 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:13.866 12:25:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:13.866 12:25:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:13.866 12:25:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:13.866 12:25:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:13.866 12:25:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:13.866 12:25:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:13.866 12:25:26 -- common/autotest_common.sh@10 -- # set +x 00:31:13.867 12:25:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:13.867 12:25:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:13.867 12:25:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:14.805 12:25:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:14.806 12:25:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:14.806 12:25:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:14.806 12:25:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.806 12:25:27 -- common/autotest_common.sh@10 -- # set +x 00:31:14.806 12:25:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:14.806 12:25:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:14.806 12:25:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.806 12:25:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:14.806 12:25:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:16.188 12:25:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:16.188 12:25:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:16.188 12:25:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:16.188 12:25:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.188 12:25:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:16.188 12:25:28 -- common/autotest_common.sh@10 -- # set +x 00:31:16.188 12:25:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:16.188 12:25:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.188 12:25:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:16.188 12:25:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:17.128 12:25:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:17.128 12:25:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:31:17.128 12:25:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:17.128 12:25:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:17.128 12:25:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:17.128 12:25:29 -- common/autotest_common.sh@10 -- # set +x 00:31:17.128 12:25:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:17.128 12:25:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:17.128 12:25:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:17.128 12:25:29 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:18.067 12:25:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:18.067 12:25:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:18.067 12:25:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:18.067 12:25:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.067 12:25:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:18.067 12:25:30 -- common/autotest_common.sh@10 -- # set +x 00:31:18.067 12:25:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:18.067 12:25:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.067 12:25:31 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:18.067 12:25:31 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:19.007 [2024-06-11 12:25:31.980271] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:19.007 [2024-06-11 12:25:31.980324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.007 [2024-06-11 12:25:31.980335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.007 [2024-06-11 12:25:31.980345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.007 [2024-06-11 12:25:31.980352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.007 [2024-06-11 12:25:31.980361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.007 [2024-06-11 12:25:31.980368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.007 [2024-06-11 12:25:31.980376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.007 [2024-06-11 12:25:31.980383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.007 [2024-06-11 12:25:31.980391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.007 [2024-06-11 12:25:31.980399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.007 [2024-06-11 12:25:31.980406] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce5cd0 is same with the state(5) to be set 00:31:19.007 [2024-06-11 12:25:31.990290] 
nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce5cd0 (9): Bad file descriptor 00:31:19.007 [2024-06-11 12:25:32.000336] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:19.007 12:25:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:19.007 12:25:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:19.007 12:25:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:19.007 12:25:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:19.007 12:25:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:19.007 12:25:32 -- common/autotest_common.sh@10 -- # set +x 00:31:19.007 12:25:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:20.382 [2024-06-11 12:25:33.025089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:21.320 [2024-06-11 12:25:34.049058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:21.320 [2024-06-11 12:25:34.049099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce5cd0 with addr=10.0.0.2, port=4420 00:31:21.320 [2024-06-11 12:25:34.049111] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce5cd0 is same with the state(5) to be set 00:31:21.320 [2024-06-11 12:25:34.049454] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce5cd0 (9): Bad file descriptor 00:31:21.320 [2024-06-11 12:25:34.049477] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:21.320 [2024-06-11 12:25:34.049498] bdev_nvme.c:6504:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:21.320 [2024-06-11 12:25:34.049522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:21.320 [2024-06-11 12:25:34.049532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:21.320 [2024-06-11 12:25:34.049542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:21.320 [2024-06-11 12:25:34.049550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:21.320 [2024-06-11 12:25:34.049563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:21.320 [2024-06-11 12:25:34.049571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:21.320 [2024-06-11 12:25:34.049579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:21.320 [2024-06-11 12:25:34.049586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:21.320 [2024-06-11 12:25:34.049595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:21.320 [2024-06-11 12:25:34.049602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:21.320 [2024-06-11 12:25:34.049610] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:31:21.320 [2024-06-11 12:25:34.050133] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce60e0 (9): Bad file descriptor 00:31:21.320 [2024-06-11 12:25:34.051144] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:21.320 [2024-06-11 12:25:34.051156] nvme_ctrlr.c:1135:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:21.320 12:25:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.320 12:25:34 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:21.320 12:25:34 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:22.257 12:25:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:22.257 12:25:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:22.257 12:25:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:22.257 12:25:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.257 12:25:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:22.258 12:25:35 -- common/autotest_common.sh@10 -- # set +x 00:31:22.258 12:25:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:22.258 12:25:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.258 12:25:35 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:22.258 12:25:35 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:22.258 12:25:35 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:22.258 12:25:35 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:22.258 12:25:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:22.258 12:25:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:22.258 12:25:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:22.258 12:25:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:22.258 12:25:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.258 12:25:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:22.258 12:25:35 -- common/autotest_common.sh@10 -- # set +x 00:31:22.258 12:25:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.258 12:25:35 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:22.258 12:25:35 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:23.195 [2024-06-11 12:25:36.103668] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:23.195 [2024-06-11 12:25:36.103688] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:23.195 [2024-06-11 12:25:36.103701] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:23.454 [2024-06-11 12:25:36.234112] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:23.454 12:25:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:23.454 12:25:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:23.454 12:25:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:23.454 12:25:36 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:31:23.454 12:25:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:23.454 12:25:36 -- common/autotest_common.sh@10 -- # set +x 00:31:23.454 12:25:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:23.454 12:25:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.454 12:25:36 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:23.455 12:25:36 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:23.455 [2024-06-11 12:25:36.457412] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:23.455 [2024-06-11 12:25:36.457451] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:23.455 [2024-06-11 12:25:36.457470] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:23.455 [2024-06-11 12:25:36.457484] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:23.455 [2024-06-11 12:25:36.457492] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:23.455 [2024-06-11 12:25:36.462272] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xd2a110 was disconnected and freed. delete nvme_qpair. 00:31:24.395 12:25:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:24.395 12:25:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:24.395 12:25:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:24.395 12:25:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.395 12:25:37 -- common/autotest_common.sh@10 -- # set +x 00:31:24.395 12:25:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:24.395 12:25:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:24.395 12:25:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.395 12:25:37 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:24.395 12:25:37 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:24.395 12:25:37 -- host/discovery_remove_ifc.sh@90 -- # killprocess 1680017 00:31:24.395 12:25:37 -- common/autotest_common.sh@926 -- # '[' -z 1680017 ']' 00:31:24.395 12:25:37 -- common/autotest_common.sh@930 -- # kill -0 1680017 00:31:24.395 12:25:37 -- common/autotest_common.sh@931 -- # uname 00:31:24.395 12:25:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:24.395 12:25:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1680017 00:31:24.655 12:25:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:24.655 12:25:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:24.655 12:25:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1680017' 00:31:24.655 killing process with pid 1680017 00:31:24.655 12:25:37 -- common/autotest_common.sh@945 -- # kill 1680017 00:31:24.655 12:25:37 -- common/autotest_common.sh@950 -- # wait 1680017 00:31:24.655 12:25:37 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:24.655 12:25:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:24.655 12:25:37 -- nvmf/common.sh@116 -- # sync 00:31:24.655 12:25:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:24.655 12:25:37 -- nvmf/common.sh@119 -- # set +e 00:31:24.655 12:25:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:24.655 12:25:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:24.655 rmmod nvme_tcp 00:31:24.655 rmmod nvme_fabrics 00:31:24.655 rmmod nvme_keyring 00:31:24.655 12:25:37 
-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:24.655 12:25:37 -- nvmf/common.sh@123 -- # set -e 00:31:24.655 12:25:37 -- nvmf/common.sh@124 -- # return 0 00:31:24.655 12:25:37 -- nvmf/common.sh@477 -- # '[' -n 1679747 ']' 00:31:24.655 12:25:37 -- nvmf/common.sh@478 -- # killprocess 1679747 00:31:24.655 12:25:37 -- common/autotest_common.sh@926 -- # '[' -z 1679747 ']' 00:31:24.655 12:25:37 -- common/autotest_common.sh@930 -- # kill -0 1679747 00:31:24.655 12:25:37 -- common/autotest_common.sh@931 -- # uname 00:31:24.655 12:25:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:24.655 12:25:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1679747 00:31:24.916 12:25:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:24.916 12:25:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:24.916 12:25:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1679747' 00:31:24.916 killing process with pid 1679747 00:31:24.916 12:25:37 -- common/autotest_common.sh@945 -- # kill 1679747 00:31:24.916 12:25:37 -- common/autotest_common.sh@950 -- # wait 1679747 00:31:24.916 12:25:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:24.916 12:25:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:24.916 12:25:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:24.916 12:25:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:24.916 12:25:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:24.916 12:25:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.916 12:25:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:24.916 12:25:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.457 12:25:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:27.457 00:31:27.457 real 0m23.501s 00:31:27.457 user 0m27.864s 00:31:27.457 sys 0m6.430s 00:31:27.457 12:25:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:27.457 12:25:39 -- common/autotest_common.sh@10 -- # set +x 00:31:27.457 ************************************ 00:31:27.457 END TEST nvmf_discovery_remove_ifc 00:31:27.457 ************************************ 00:31:27.457 12:25:39 -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:31:27.457 12:25:39 -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:27.457 12:25:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:27.457 12:25:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:27.457 12:25:39 -- common/autotest_common.sh@10 -- # set +x 00:31:27.457 ************************************ 00:31:27.457 START TEST nvmf_digest 00:31:27.457 ************************************ 00:31:27.457 12:25:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:27.457 * Looking for test storage... 
00:31:27.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:27.457 12:25:40 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:27.457 12:25:40 -- nvmf/common.sh@7 -- # uname -s 00:31:27.457 12:25:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:27.457 12:25:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:27.457 12:25:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:27.457 12:25:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:27.457 12:25:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:27.457 12:25:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:27.457 12:25:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:27.457 12:25:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:27.457 12:25:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:27.457 12:25:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:27.457 12:25:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:27.457 12:25:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:27.457 12:25:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:27.457 12:25:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:27.457 12:25:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:27.457 12:25:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:27.457 12:25:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:27.457 12:25:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:27.457 12:25:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:27.458 12:25:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.458 12:25:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.458 12:25:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.458 12:25:40 -- paths/export.sh@5 -- # export PATH 00:31:27.458 12:25:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.458 12:25:40 -- nvmf/common.sh@46 -- # : 0 00:31:27.458 12:25:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:27.458 12:25:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:27.458 12:25:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:27.458 12:25:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:27.458 12:25:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:27.458 12:25:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:27.458 12:25:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:27.458 12:25:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:27.458 12:25:40 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:27.458 12:25:40 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:31:27.458 12:25:40 -- host/digest.sh@16 -- # runtime=2 00:31:27.458 12:25:40 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:31:27.458 12:25:40 -- host/digest.sh@132 -- # nvmftestinit 00:31:27.458 12:25:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:27.458 12:25:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:27.458 12:25:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:27.458 12:25:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:27.458 12:25:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:27.458 12:25:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.458 12:25:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:27.458 12:25:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.458 12:25:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:27.458 12:25:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:27.458 12:25:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:27.458 12:25:40 -- common/autotest_common.sh@10 -- # set +x 00:31:34.035 12:25:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:34.035 12:25:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:34.035 12:25:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:34.035 12:25:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:34.035 12:25:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:34.035 12:25:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:34.035 12:25:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:34.035 12:25:46 -- 
nvmf/common.sh@294 -- # net_devs=() 00:31:34.035 12:25:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:34.035 12:25:46 -- nvmf/common.sh@295 -- # e810=() 00:31:34.035 12:25:46 -- nvmf/common.sh@295 -- # local -ga e810 00:31:34.035 12:25:46 -- nvmf/common.sh@296 -- # x722=() 00:31:34.035 12:25:46 -- nvmf/common.sh@296 -- # local -ga x722 00:31:34.035 12:25:46 -- nvmf/common.sh@297 -- # mlx=() 00:31:34.035 12:25:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:34.035 12:25:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:34.035 12:25:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:34.035 12:25:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:34.035 12:25:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:34.035 12:25:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:34.035 12:25:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:34.035 12:25:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:34.035 12:25:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:34.035 12:25:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:34.035 12:25:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:34.035 12:25:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:34.035 12:25:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:34.035 12:25:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:34.035 12:25:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:34.035 12:25:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:34.035 12:25:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:34.035 12:25:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:34.035 12:25:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:34.035 12:25:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:34.036 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:34.036 12:25:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:34.036 12:25:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:34.036 12:25:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:34.036 12:25:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:34.036 12:25:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:34.036 12:25:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:34.036 12:25:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:34.036 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:34.036 12:25:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:34.036 12:25:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:34.036 12:25:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:34.036 12:25:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:34.036 12:25:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:34.036 12:25:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:34.036 12:25:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:34.036 12:25:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:34.036 12:25:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:34.036 12:25:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:34.036 12:25:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:34.036 12:25:46 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:34.036 12:25:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:34.036 Found net devices under 0000:31:00.0: cvl_0_0 00:31:34.036 12:25:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:34.036 12:25:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:34.036 12:25:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:34.036 12:25:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:34.036 12:25:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:34.036 12:25:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:34.036 Found net devices under 0000:31:00.1: cvl_0_1 00:31:34.036 12:25:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:34.036 12:25:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:34.036 12:25:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:34.036 12:25:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:34.036 12:25:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:34.036 12:25:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:34.036 12:25:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:34.036 12:25:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:34.036 12:25:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:34.036 12:25:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:34.036 12:25:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:34.036 12:25:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:34.036 12:25:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:34.036 12:25:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:34.036 12:25:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:34.036 12:25:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:34.036 12:25:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:34.036 12:25:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:34.036 12:25:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:34.296 12:25:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:34.296 12:25:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:34.296 12:25:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:34.296 12:25:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:34.296 12:25:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:34.296 12:25:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:34.296 12:25:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:34.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:34.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:31:34.296 00:31:34.296 --- 10.0.0.2 ping statistics --- 00:31:34.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.296 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:31:34.296 12:25:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:34.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:34.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:31:34.296 00:31:34.296 --- 10.0.0.1 ping statistics --- 00:31:34.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.296 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:31:34.296 12:25:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:34.296 12:25:47 -- nvmf/common.sh@410 -- # return 0 00:31:34.296 12:25:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:34.296 12:25:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:34.296 12:25:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:34.296 12:25:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:34.296 12:25:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:34.296 12:25:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:34.296 12:25:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:34.296 12:25:47 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:34.296 12:25:47 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:31:34.296 12:25:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:34.296 12:25:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:34.296 12:25:47 -- common/autotest_common.sh@10 -- # set +x 00:31:34.296 ************************************ 00:31:34.296 START TEST nvmf_digest_clean 00:31:34.296 ************************************ 00:31:34.296 12:25:47 -- common/autotest_common.sh@1104 -- # run_digest 00:31:34.296 12:25:47 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:31:34.296 12:25:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:34.296 12:25:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:34.296 12:25:47 -- common/autotest_common.sh@10 -- # set +x 00:31:34.296 12:25:47 -- nvmf/common.sh@469 -- # nvmfpid=1686712 00:31:34.296 12:25:47 -- nvmf/common.sh@470 -- # waitforlisten 1686712 00:31:34.296 12:25:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:34.296 12:25:47 -- common/autotest_common.sh@819 -- # '[' -z 1686712 ']' 00:31:34.296 12:25:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:34.296 12:25:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:34.296 12:25:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:34.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:34.296 12:25:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:34.296 12:25:47 -- common/autotest_common.sh@10 -- # set +x 00:31:34.556 [2024-06-11 12:25:47.370924] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
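The nvmf_tcp_init steps traced above hand the target NIC its own network namespace, address both ends, open the NVMe/TCP listener port and verify reachability in both directions. A condensed sketch of that bring-up, using the interface and namespace names printed in the log; the full helper in nvmf/common.sh also flushes stale addresses and handles multi-NIC layouts:

target_if=cvl_0_0        # NIC handed to the target application
initiator_if=cvl_0_1     # NIC left in the default namespace for the initiator
ns=cvl_0_0_ns_spdk       # network namespace that will run nvmf_tgt

# Move the target NIC into its namespace and address both ends of the link.
ip netns add "$ns"
ip link set "$target_if" netns "$ns"
ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up

# Allow NVMe/TCP traffic to the default listener port, then check both directions.
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                       # initiator -> target
ip netns exec "$ns" ping -c 1 10.0.0.1   # target -> initiator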
00:31:34.556 [2024-06-11 12:25:47.371006] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:34.556 EAL: No free 2048 kB hugepages reported on node 1 00:31:34.556 [2024-06-11 12:25:47.442888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:34.556 [2024-06-11 12:25:47.480025] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:34.556 [2024-06-11 12:25:47.480161] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:34.556 [2024-06-11 12:25:47.480169] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:34.556 [2024-06-11 12:25:47.480176] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:34.556 [2024-06-11 12:25:47.480209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:35.125 12:25:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:35.125 12:25:48 -- common/autotest_common.sh@852 -- # return 0 00:31:35.125 12:25:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:35.125 12:25:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:35.125 12:25:48 -- common/autotest_common.sh@10 -- # set +x 00:31:35.125 12:25:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:35.125 12:25:48 -- host/digest.sh@120 -- # common_target_config 00:31:35.125 12:25:48 -- host/digest.sh@43 -- # rpc_cmd 00:31:35.125 12:25:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:35.125 12:25:48 -- common/autotest_common.sh@10 -- # set +x 00:31:35.385 null0 00:31:35.385 [2024-06-11 12:25:48.228506] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:35.385 [2024-06-11 12:25:48.252687] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:35.385 12:25:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:35.385 12:25:48 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:31:35.385 12:25:48 -- host/digest.sh@77 -- # local rw bs qd 00:31:35.385 12:25:48 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:35.385 12:25:48 -- host/digest.sh@80 -- # rw=randread 00:31:35.385 12:25:48 -- host/digest.sh@80 -- # bs=4096 00:31:35.385 12:25:48 -- host/digest.sh@80 -- # qd=128 00:31:35.385 12:25:48 -- host/digest.sh@82 -- # bperfpid=1686979 00:31:35.385 12:25:48 -- host/digest.sh@83 -- # waitforlisten 1686979 /var/tmp/bperf.sock 00:31:35.385 12:25:48 -- common/autotest_common.sh@819 -- # '[' -z 1686979 ']' 00:31:35.385 12:25:48 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:35.385 12:25:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:35.385 12:25:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:35.385 12:25:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:35.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
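Both applications in this test are started paused on their RPC sockets: nvmf_tgt inside the target namespace and bdevperf on /var/tmp/bperf.sock, each with --wait-for-rpc, and the script blocks in waitforlisten until the sockets answer. A sketch of that start-up with paths and flags copied from the trace; the socket poll below is a simplification of the real waitforlisten helper, which also tracks the pid and bounds its retries:

spdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path as printed in the log
rpc_py="$spdk_dir/scripts/rpc.py"

# Target: nvmf_tgt in the target namespace, paused until framework_start_init.
ip netns exec cvl_0_0_ns_spdk "$spdk_dir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!

# Initiator: bdevperf on its own RPC socket, kept alive by -z until perform_tests.
"$spdk_dir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
bperfpid=$!

# Simplified wait: poll each RPC socket until it responds.
until "$rpc_py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
until "$rpc_py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done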
00:31:35.385 12:25:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:35.385 12:25:48 -- common/autotest_common.sh@10 -- # set +x 00:31:35.385 [2024-06-11 12:25:48.303866] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:35.385 [2024-06-11 12:25:48.303913] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1686979 ] 00:31:35.385 EAL: No free 2048 kB hugepages reported on node 1 00:31:35.385 [2024-06-11 12:25:48.380349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.385 [2024-06-11 12:25:48.409053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:36.350 12:25:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:36.351 12:25:49 -- common/autotest_common.sh@852 -- # return 0 00:31:36.351 12:25:49 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:36.351 12:25:49 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:36.351 12:25:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:36.351 12:25:49 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:36.351 12:25:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:36.617 nvme0n1 00:31:36.617 12:25:49 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:36.617 12:25:49 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:36.877 Running I/O for 2 seconds... 
00:31:38.786 00:31:38.786 Latency(us) 00:31:38.786 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:38.786 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:38.786 nvme0n1 : 2.04 15982.46 62.43 0.00 0.00 7845.11 2648.75 47841.28 00:31:38.786 =================================================================================================================== 00:31:38.786 Total : 15982.46 62.43 0.00 0.00 7845.11 2648.75 47841.28 00:31:38.786 0 00:31:38.786 12:25:51 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:38.786 12:25:51 -- host/digest.sh@92 -- # get_accel_stats 00:31:38.786 12:25:51 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:38.786 12:25:51 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:38.786 | select(.opcode=="crc32c") 00:31:38.786 | "\(.module_name) \(.executed)"' 00:31:38.786 12:25:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:39.046 12:25:51 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:39.046 12:25:51 -- host/digest.sh@93 -- # exp_module=software 00:31:39.046 12:25:51 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:39.046 12:25:51 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:39.046 12:25:51 -- host/digest.sh@97 -- # killprocess 1686979 00:31:39.046 12:25:51 -- common/autotest_common.sh@926 -- # '[' -z 1686979 ']' 00:31:39.046 12:25:51 -- common/autotest_common.sh@930 -- # kill -0 1686979 00:31:39.046 12:25:51 -- common/autotest_common.sh@931 -- # uname 00:31:39.046 12:25:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:39.046 12:25:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1686979 00:31:39.046 12:25:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:39.046 12:25:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:39.046 12:25:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1686979' 00:31:39.046 killing process with pid 1686979 00:31:39.046 12:25:51 -- common/autotest_common.sh@945 -- # kill 1686979 00:31:39.046 Received shutdown signal, test time was about 2.000000 seconds 00:31:39.046 00:31:39.046 Latency(us) 00:31:39.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:39.046 =================================================================================================================== 00:31:39.046 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:39.046 12:25:51 -- common/autotest_common.sh@950 -- # wait 1686979 00:31:39.046 12:25:52 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:31:39.046 12:25:52 -- host/digest.sh@77 -- # local rw bs qd 00:31:39.046 12:25:52 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:39.046 12:25:52 -- host/digest.sh@80 -- # rw=randread 00:31:39.046 12:25:52 -- host/digest.sh@80 -- # bs=131072 00:31:39.046 12:25:52 -- host/digest.sh@80 -- # qd=16 00:31:39.046 12:25:52 -- host/digest.sh@82 -- # bperfpid=1687680 00:31:39.046 12:25:52 -- host/digest.sh@83 -- # waitforlisten 1687680 /var/tmp/bperf.sock 00:31:39.046 12:25:52 -- common/autotest_common.sh@819 -- # '[' -z 1687680 ']' 00:31:39.046 12:25:52 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:39.046 12:25:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
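After each run the script reads back accel statistics from the bdevperf app and checks that the crc32c operations behind the digest were really executed, and by the expected module. A sketch of that check, reusing the accel_get_stats call and jq filter shown in the trace; exp_module is software here because no hardware accel module is configured on this machine:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

# Keep only the crc32c entry from the accel statistics: "<module> <count>".
read -r acc_module acc_executed < <(
    "$rpc_py" -s "$bperf_sock" accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)

exp_module=software
(( acc_executed > 0 ))                     # the digest path really used crc32c
[[ "$acc_module" == "$exp_module" ]]       # and the expected module executed it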
00:31:39.046 12:25:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:39.046 12:25:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:39.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:39.046 12:25:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:39.046 12:25:52 -- common/autotest_common.sh@10 -- # set +x 00:31:39.306 [2024-06-11 12:25:52.113941] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:39.306 [2024-06-11 12:25:52.113995] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1687680 ] 00:31:39.306 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:39.306 Zero copy mechanism will not be used. 00:31:39.306 EAL: No free 2048 kB hugepages reported on node 1 00:31:39.306 [2024-06-11 12:25:52.191706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.306 [2024-06-11 12:25:52.220205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:39.876 12:25:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:39.876 12:25:52 -- common/autotest_common.sh@852 -- # return 0 00:31:39.876 12:25:52 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:39.876 12:25:52 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:39.876 12:25:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:40.137 12:25:53 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:40.137 12:25:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:40.397 nvme0n1 00:31:40.397 12:25:53 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:40.397 12:25:53 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:40.397 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:40.397 Zero copy mechanism will not be used. 00:31:40.397 Running I/O for 2 seconds... 
00:31:42.935 00:31:42.935 Latency(us) 00:31:42.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.935 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:42.935 nvme0n1 : 2.00 6696.42 837.05 0.00 0.00 2386.59 535.89 6717.44 00:31:42.935 =================================================================================================================== 00:31:42.935 Total : 6696.42 837.05 0.00 0.00 2386.59 535.89 6717.44 00:31:42.935 0 00:31:42.935 12:25:55 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:42.935 12:25:55 -- host/digest.sh@92 -- # get_accel_stats 00:31:42.935 12:25:55 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:42.935 12:25:55 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:42.935 | select(.opcode=="crc32c") 00:31:42.935 | "\(.module_name) \(.executed)"' 00:31:42.935 12:25:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:42.935 12:25:55 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:42.935 12:25:55 -- host/digest.sh@93 -- # exp_module=software 00:31:42.935 12:25:55 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:42.935 12:25:55 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:42.935 12:25:55 -- host/digest.sh@97 -- # killprocess 1687680 00:31:42.935 12:25:55 -- common/autotest_common.sh@926 -- # '[' -z 1687680 ']' 00:31:42.935 12:25:55 -- common/autotest_common.sh@930 -- # kill -0 1687680 00:31:42.935 12:25:55 -- common/autotest_common.sh@931 -- # uname 00:31:42.935 12:25:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:42.935 12:25:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1687680 00:31:42.935 12:25:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:42.935 12:25:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:42.935 12:25:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1687680' 00:31:42.935 killing process with pid 1687680 00:31:42.935 12:25:55 -- common/autotest_common.sh@945 -- # kill 1687680 00:31:42.935 Received shutdown signal, test time was about 2.000000 seconds 00:31:42.935 00:31:42.935 Latency(us) 00:31:42.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.935 =================================================================================================================== 00:31:42.935 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:42.935 12:25:55 -- common/autotest_common.sh@950 -- # wait 1687680 00:31:42.935 12:25:55 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:31:42.935 12:25:55 -- host/digest.sh@77 -- # local rw bs qd 00:31:42.935 12:25:55 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:42.935 12:25:55 -- host/digest.sh@80 -- # rw=randwrite 00:31:42.935 12:25:55 -- host/digest.sh@80 -- # bs=4096 00:31:42.935 12:25:55 -- host/digest.sh@80 -- # qd=128 00:31:42.935 12:25:55 -- host/digest.sh@82 -- # bperfpid=1688367 00:31:42.935 12:25:55 -- host/digest.sh@83 -- # waitforlisten 1688367 /var/tmp/bperf.sock 00:31:42.935 12:25:55 -- common/autotest_common.sh@819 -- # '[' -z 1688367 ']' 00:31:42.935 12:25:55 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:42.935 12:25:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
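Each bperf iteration drives the paused bdevperf instance through the same RPC sequence: finish initialization, attach the target subsystem with data digest enabled, then trigger the workload from bdevperf.py. A sketch with the arguments copied from the trace:

spdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_py="$spdk_dir/scripts/rpc.py"
bperf_sock=/var/tmp/bperf.sock

# Release the app from --wait-for-rpc.
"$rpc_py" -s "$bperf_sock" framework_start_init

# Attach the subsystem over TCP with data digest (--ddgst); it appears as nvme0n1.
"$rpc_py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Run the workload configured on the bdevperf command line (-w/-o/-q/-t).
"$spdk_dir/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests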
00:31:42.935 12:25:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:42.935 12:25:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:42.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:42.935 12:25:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:42.935 12:25:55 -- common/autotest_common.sh@10 -- # set +x 00:31:42.935 [2024-06-11 12:25:55.752422] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:42.935 [2024-06-11 12:25:55.752474] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1688367 ] 00:31:42.935 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.935 [2024-06-11 12:25:55.829546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:42.935 [2024-06-11 12:25:55.854722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:43.506 12:25:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:43.506 12:25:56 -- common/autotest_common.sh@852 -- # return 0 00:31:43.506 12:25:56 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:43.506 12:25:56 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:43.506 12:25:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:43.766 12:25:56 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:43.766 12:25:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:44.026 nvme0n1 00:31:44.026 12:25:56 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:44.026 12:25:56 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:44.026 Running I/O for 2 seconds... 
00:31:46.567 00:31:46.567 Latency(us) 00:31:46.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:46.567 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:46.567 nvme0n1 : 2.00 22805.26 89.08 0.00 0.00 5607.30 3959.47 10922.67 00:31:46.567 =================================================================================================================== 00:31:46.567 Total : 22805.26 89.08 0.00 0.00 5607.30 3959.47 10922.67 00:31:46.567 0 00:31:46.567 12:25:59 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:46.567 12:25:59 -- host/digest.sh@92 -- # get_accel_stats 00:31:46.567 12:25:59 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:46.567 12:25:59 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:46.567 | select(.opcode=="crc32c") 00:31:46.567 | "\(.module_name) \(.executed)"' 00:31:46.567 12:25:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:46.567 12:25:59 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:46.567 12:25:59 -- host/digest.sh@93 -- # exp_module=software 00:31:46.567 12:25:59 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:46.567 12:25:59 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:46.567 12:25:59 -- host/digest.sh@97 -- # killprocess 1688367 00:31:46.567 12:25:59 -- common/autotest_common.sh@926 -- # '[' -z 1688367 ']' 00:31:46.567 12:25:59 -- common/autotest_common.sh@930 -- # kill -0 1688367 00:31:46.567 12:25:59 -- common/autotest_common.sh@931 -- # uname 00:31:46.567 12:25:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:46.567 12:25:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1688367 00:31:46.567 12:25:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:46.567 12:25:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:46.567 12:25:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1688367' 00:31:46.567 killing process with pid 1688367 00:31:46.567 12:25:59 -- common/autotest_common.sh@945 -- # kill 1688367 00:31:46.567 Received shutdown signal, test time was about 2.000000 seconds 00:31:46.567 00:31:46.567 Latency(us) 00:31:46.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:46.567 =================================================================================================================== 00:31:46.567 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:46.567 12:25:59 -- common/autotest_common.sh@950 -- # wait 1688367 00:31:46.567 12:25:59 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:31:46.567 12:25:59 -- host/digest.sh@77 -- # local rw bs qd 00:31:46.568 12:25:59 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:46.568 12:25:59 -- host/digest.sh@80 -- # rw=randwrite 00:31:46.568 12:25:59 -- host/digest.sh@80 -- # bs=131072 00:31:46.568 12:25:59 -- host/digest.sh@80 -- # qd=16 00:31:46.568 12:25:59 -- host/digest.sh@82 -- # bperfpid=1689061 00:31:46.568 12:25:59 -- host/digest.sh@83 -- # waitforlisten 1689061 /var/tmp/bperf.sock 00:31:46.568 12:25:59 -- common/autotest_common.sh@819 -- # '[' -z 1689061 ']' 00:31:46.568 12:25:59 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:46.568 12:25:59 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:31:46.568 12:25:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:46.568 12:25:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:46.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:46.568 12:25:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:46.568 12:25:59 -- common/autotest_common.sh@10 -- # set +x 00:31:46.568 [2024-06-11 12:25:59.416497] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:46.568 [2024-06-11 12:25:59.416553] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1689061 ] 00:31:46.568 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:46.568 Zero copy mechanism will not be used. 00:31:46.568 EAL: No free 2048 kB hugepages reported on node 1 00:31:46.568 [2024-06-11 12:25:59.491894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.568 [2024-06-11 12:25:59.518536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:47.506 12:26:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:47.506 12:26:00 -- common/autotest_common.sh@852 -- # return 0 00:31:47.506 12:26:00 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:47.506 12:26:00 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:47.506 12:26:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:47.506 12:26:00 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:47.506 12:26:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:47.765 nvme0n1 00:31:48.026 12:26:00 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:48.026 12:26:00 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:48.026 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:48.026 Zero copy mechanism will not be used. 00:31:48.026 Running I/O for 2 seconds... 
00:31:49.937 00:31:49.937 Latency(us) 00:31:49.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:49.937 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:49.937 nvme0n1 : 2.00 5754.20 719.27 0.00 0.00 2777.53 1351.68 11414.19 00:31:49.937 =================================================================================================================== 00:31:49.937 Total : 5754.20 719.27 0.00 0.00 2777.53 1351.68 11414.19 00:31:49.937 0 00:31:49.937 12:26:02 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:49.937 12:26:02 -- host/digest.sh@92 -- # get_accel_stats 00:31:49.937 12:26:02 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:49.937 12:26:02 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:49.937 | select(.opcode=="crc32c") 00:31:49.937 | "\(.module_name) \(.executed)"' 00:31:49.937 12:26:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:50.197 12:26:03 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:50.197 12:26:03 -- host/digest.sh@93 -- # exp_module=software 00:31:50.197 12:26:03 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:50.197 12:26:03 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:50.197 12:26:03 -- host/digest.sh@97 -- # killprocess 1689061 00:31:50.197 12:26:03 -- common/autotest_common.sh@926 -- # '[' -z 1689061 ']' 00:31:50.197 12:26:03 -- common/autotest_common.sh@930 -- # kill -0 1689061 00:31:50.197 12:26:03 -- common/autotest_common.sh@931 -- # uname 00:31:50.197 12:26:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:50.197 12:26:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1689061 00:31:50.197 12:26:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:50.197 12:26:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:50.197 12:26:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1689061' 00:31:50.197 killing process with pid 1689061 00:31:50.197 12:26:03 -- common/autotest_common.sh@945 -- # kill 1689061 00:31:50.197 Received shutdown signal, test time was about 2.000000 seconds 00:31:50.197 00:31:50.197 Latency(us) 00:31:50.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:50.197 =================================================================================================================== 00:31:50.197 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:50.197 12:26:03 -- common/autotest_common.sh@950 -- # wait 1689061 00:31:50.197 12:26:03 -- host/digest.sh@126 -- # killprocess 1686712 00:31:50.197 12:26:03 -- common/autotest_common.sh@926 -- # '[' -z 1686712 ']' 00:31:50.197 12:26:03 -- common/autotest_common.sh@930 -- # kill -0 1686712 00:31:50.197 12:26:03 -- common/autotest_common.sh@931 -- # uname 00:31:50.197 12:26:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:50.197 12:26:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1686712 00:31:50.457 12:26:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:50.457 12:26:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:50.457 12:26:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1686712' 00:31:50.457 killing process with pid 1686712 00:31:50.457 12:26:03 -- common/autotest_common.sh@945 -- # kill 1686712 00:31:50.457 12:26:03 -- common/autotest_common.sh@950 -- # wait 1686712 
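Teardown of each helper process goes through the killprocess pattern visible in the common/autotest_common.sh trace: confirm the pid is alive and belongs to an SPDK reactor, then kill and reap it. A rough sketch; the real helper also special-cases sudo-wrapped processes and non-Linux hosts:

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1          # still running?
    ps --no-headers -o comm= "$pid"     # prints e.g. reactor_0 / reactor_1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}

killprocess "$bperfpid"   # the bdevperf instance from the last run
killprocess "$nvmfpid"    # the nvmf_tgt instance started by nvmfappstart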
00:31:50.457 00:31:50.457 real 0m16.094s 00:31:50.457 user 0m31.451s 00:31:50.457 sys 0m3.470s 00:31:50.457 12:26:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:50.457 12:26:03 -- common/autotest_common.sh@10 -- # set +x 00:31:50.457 ************************************ 00:31:50.457 END TEST nvmf_digest_clean 00:31:50.457 ************************************ 00:31:50.457 12:26:03 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:31:50.457 12:26:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:50.457 12:26:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:50.457 12:26:03 -- common/autotest_common.sh@10 -- # set +x 00:31:50.457 ************************************ 00:31:50.457 START TEST nvmf_digest_error 00:31:50.457 ************************************ 00:31:50.457 12:26:03 -- common/autotest_common.sh@1104 -- # run_digest_error 00:31:50.457 12:26:03 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:31:50.457 12:26:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:50.457 12:26:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:50.457 12:26:03 -- common/autotest_common.sh@10 -- # set +x 00:31:50.457 12:26:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:50.457 12:26:03 -- nvmf/common.sh@469 -- # nvmfpid=1689948 00:31:50.457 12:26:03 -- nvmf/common.sh@470 -- # waitforlisten 1689948 00:31:50.457 12:26:03 -- common/autotest_common.sh@819 -- # '[' -z 1689948 ']' 00:31:50.457 12:26:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:50.457 12:26:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:50.457 12:26:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:50.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:50.457 12:26:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:50.457 12:26:03 -- common/autotest_common.sh@10 -- # set +x 00:31:50.457 [2024-06-11 12:26:03.479759] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:50.457 [2024-06-11 12:26:03.479814] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:50.717 EAL: No free 2048 kB hugepages reported on node 1 00:31:50.717 [2024-06-11 12:26:03.542739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:50.717 [2024-06-11 12:26:03.570980] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:50.717 [2024-06-11 12:26:03.571104] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:50.717 [2024-06-11 12:26:03.571114] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:50.717 [2024-06-11 12:26:03.571121] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
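The nvmf_digest_error test that starts here differs from the clean runs in how crc32c is handled: crc32c is assigned to the error-injection accel module, and once the initiator is attached the module is told to corrupt a batch of digests so reads complete with data digest failures. A sketch of that sequence with the RPC names, flags and the -i 256 count taken from the trace that follows; the split between the default RPC socket and /var/tmp/bperf.sock is inferred from which calls carry -s in the trace:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

# Default-socket side: route crc32c through the error-injection accel module.
"$rpc_py" accel_assign_opc -o crc32c -m error

# Initiator side: keep NVMe error statistics, retry indefinitely at the bdev
# layer, and attach the subsystem with data digest enabled.
"$rpc_py" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
"$rpc_py" accel_error_inject_error -o crc32c -t disable    # keep digests clean while connecting
"$rpc_py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt the next 256 crc32c results and drive reads; the initiator is expected
# to log "data digest error" and COMMAND TRANSIENT TRANSPORT ERROR completions.
"$rpc_py" accel_error_inject_error -o crc32c -t corrupt -i 256
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests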
00:31:50.717 [2024-06-11 12:26:03.571143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:50.717 12:26:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:50.718 12:26:03 -- common/autotest_common.sh@852 -- # return 0 00:31:50.718 12:26:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:50.718 12:26:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:50.718 12:26:03 -- common/autotest_common.sh@10 -- # set +x 00:31:50.718 12:26:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:50.718 12:26:03 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:31:50.718 12:26:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:50.718 12:26:03 -- common/autotest_common.sh@10 -- # set +x 00:31:50.718 [2024-06-11 12:26:03.647596] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:31:50.718 12:26:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:50.718 12:26:03 -- host/digest.sh@104 -- # common_target_config 00:31:50.718 12:26:03 -- host/digest.sh@43 -- # rpc_cmd 00:31:50.718 12:26:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:50.718 12:26:03 -- common/autotest_common.sh@10 -- # set +x 00:31:50.718 null0 00:31:50.718 [2024-06-11 12:26:03.722158] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:50.718 [2024-06-11 12:26:03.746336] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:50.718 12:26:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:50.718 12:26:03 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:31:50.718 12:26:03 -- host/digest.sh@54 -- # local rw bs qd 00:31:50.718 12:26:03 -- host/digest.sh@56 -- # rw=randread 00:31:50.979 12:26:03 -- host/digest.sh@56 -- # bs=4096 00:31:50.979 12:26:03 -- host/digest.sh@56 -- # qd=128 00:31:50.979 12:26:03 -- host/digest.sh@58 -- # bperfpid=1690101 00:31:50.979 12:26:03 -- host/digest.sh@60 -- # waitforlisten 1690101 /var/tmp/bperf.sock 00:31:50.979 12:26:03 -- common/autotest_common.sh@819 -- # '[' -z 1690101 ']' 00:31:50.979 12:26:03 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:31:50.979 12:26:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:50.979 12:26:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:50.979 12:26:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:50.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:50.979 12:26:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:50.979 12:26:03 -- common/autotest_common.sh@10 -- # set +x 00:31:50.979 [2024-06-11 12:26:03.797780] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:31:50.979 [2024-06-11 12:26:03.797826] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1690101 ] 00:31:50.979 EAL: No free 2048 kB hugepages reported on node 1 00:31:50.979 [2024-06-11 12:26:03.873882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:50.979 [2024-06-11 12:26:03.900698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:51.549 12:26:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:51.549 12:26:04 -- common/autotest_common.sh@852 -- # return 0 00:31:51.549 12:26:04 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:51.549 12:26:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:51.808 12:26:04 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:51.808 12:26:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:51.808 12:26:04 -- common/autotest_common.sh@10 -- # set +x 00:31:51.808 12:26:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:51.808 12:26:04 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:51.808 12:26:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:52.069 nvme0n1 00:31:52.069 12:26:04 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:52.069 12:26:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:52.069 12:26:04 -- common/autotest_common.sh@10 -- # set +x 00:31:52.069 12:26:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:52.069 12:26:04 -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:52.069 12:26:04 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:52.069 Running I/O for 2 seconds... 
00:31:52.069 [2024-06-11 12:26:05.096149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.069 [2024-06-11 12:26:05.096179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.069 [2024-06-11 12:26:05.096187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.330 [2024-06-11 12:26:05.105340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.330 [2024-06-11 12:26:05.105361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.330 [2024-06-11 12:26:05.105369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.330 [2024-06-11 12:26:05.119870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.330 [2024-06-11 12:26:05.119893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.330 [2024-06-11 12:26:05.119900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.330 [2024-06-11 12:26:05.132432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.330 [2024-06-11 12:26:05.132450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.330 [2024-06-11 12:26:05.132457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.330 [2024-06-11 12:26:05.146746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.330 [2024-06-11 12:26:05.146765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.330 [2024-06-11 12:26:05.146771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.330 [2024-06-11 12:26:05.161340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.330 [2024-06-11 12:26:05.161358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.330 [2024-06-11 12:26:05.161365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.330 [2024-06-11 12:26:05.176054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.330 [2024-06-11 12:26:05.176072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.330 [2024-06-11 12:26:05.176078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.330 [2024-06-11 12:26:05.190674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.330 [2024-06-11 12:26:05.190692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.330 [2024-06-11 12:26:05.190699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.330 [2024-06-11 12:26:05.205573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.330 [2024-06-11 12:26:05.205592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.330 [2024-06-11 12:26:05.205598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.331 [2024-06-11 12:26:05.219959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.331 [2024-06-11 12:26:05.219977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.331 [2024-06-11 12:26:05.219984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.331 [2024-06-11 12:26:05.234483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.331 [2024-06-11 12:26:05.234502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.331 [2024-06-11 12:26:05.234508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.331 [2024-06-11 12:26:05.248619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.331 [2024-06-11 12:26:05.248638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.331 [2024-06-11 12:26:05.248644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.331 [2024-06-11 12:26:05.262878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.331 [2024-06-11 12:26:05.262896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.331 [2024-06-11 12:26:05.262903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.331 [2024-06-11 12:26:05.277517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.331 [2024-06-11 12:26:05.277535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.331 [2024-06-11 12:26:05.277541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.331 [2024-06-11 12:26:05.291906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.331 [2024-06-11 12:26:05.291924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.331 [2024-06-11 12:26:05.291930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.331 [2024-06-11 12:26:05.306835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.331 [2024-06-11 12:26:05.306854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.331 [2024-06-11 12:26:05.306861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.331 [2024-06-11 12:26:05.320982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.331 [2024-06-11 12:26:05.321000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.331 [2024-06-11 12:26:05.321007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.331 [2024-06-11 12:26:05.335271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.331 [2024-06-11 12:26:05.335289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.331 [2024-06-11 12:26:05.335295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.331 [2024-06-11 12:26:05.349961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.331 [2024-06-11 12:26:05.349979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.331 [2024-06-11 12:26:05.349985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.331 [2024-06-11 12:26:05.364388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.331 [2024-06-11 12:26:05.364405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.331 [2024-06-11 12:26:05.364415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.591 [2024-06-11 12:26:05.373313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.591 [2024-06-11 12:26:05.373331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:52.591 [2024-06-11 12:26:05.373338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.591 [2024-06-11 12:26:05.387542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.591 [2024-06-11 12:26:05.387559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.591 [2024-06-11 12:26:05.387566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.591 [2024-06-11 12:26:05.402116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.591 [2024-06-11 12:26:05.402133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.591 [2024-06-11 12:26:05.402139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.591 [2024-06-11 12:26:05.416541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.591 [2024-06-11 12:26:05.416559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.591 [2024-06-11 12:26:05.416566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.591 [2024-06-11 12:26:05.431304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.592 [2024-06-11 12:26:05.431321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.592 [2024-06-11 12:26:05.431327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.592 [2024-06-11 12:26:05.446449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.592 [2024-06-11 12:26:05.446468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.592 [2024-06-11 12:26:05.446474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.592 [2024-06-11 12:26:05.460473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.592 [2024-06-11 12:26:05.460492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.592 [2024-06-11 12:26:05.460499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.592 [2024-06-11 12:26:05.474607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.592 [2024-06-11 12:26:05.474624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 
lba:7710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.592 [2024-06-11 12:26:05.474631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.592 [2024-06-11 12:26:05.489652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.592 [2024-06-11 12:26:05.489670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.592 [2024-06-11 12:26:05.489676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.592 [2024-06-11 12:26:05.504082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.592 [2024-06-11 12:26:05.504100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.592 [2024-06-11 12:26:05.504106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.592 [2024-06-11 12:26:05.518470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.592 [2024-06-11 12:26:05.518488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.592 [2024-06-11 12:26:05.518495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.592 [2024-06-11 12:26:05.532588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.592 [2024-06-11 12:26:05.532606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.592 [2024-06-11 12:26:05.532613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.592 [2024-06-11 12:26:05.547056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.592 [2024-06-11 12:26:05.547074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.592 [2024-06-11 12:26:05.547080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.592 [2024-06-11 12:26:05.560727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.592 [2024-06-11 12:26:05.560745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.592 [2024-06-11 12:26:05.560751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.592 [2024-06-11 12:26:05.574707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.592 [2024-06-11 12:26:05.574725] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.592 [2024-06-11 12:26:05.574731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.592 [2024-06-11 12:26:05.589777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.592 [2024-06-11 12:26:05.589794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.592 [2024-06-11 12:26:05.589800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.592 [2024-06-11 12:26:05.603677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.592 [2024-06-11 12:26:05.603695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.592 [2024-06-11 12:26:05.603704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.592 [2024-06-11 12:26:05.618454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.592 [2024-06-11 12:26:05.618472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.592 [2024-06-11 12:26:05.618478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.853 [2024-06-11 12:26:05.632534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.853 [2024-06-11 12:26:05.632552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.853 [2024-06-11 12:26:05.632558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.853 [2024-06-11 12:26:05.646785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.853 [2024-06-11 12:26:05.646802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.853 [2024-06-11 12:26:05.646808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.853 [2024-06-11 12:26:05.661051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.853 [2024-06-11 12:26:05.661070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.853 [2024-06-11 12:26:05.661077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.853 [2024-06-11 12:26:05.675330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 
00:31:52.853 [2024-06-11 12:26:05.675348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.853 [2024-06-11 12:26:05.675354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.853 [2024-06-11 12:26:05.689927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.853 [2024-06-11 12:26:05.689945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.853 [2024-06-11 12:26:05.689951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.853 [2024-06-11 12:26:05.704212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.853 [2024-06-11 12:26:05.704230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.854 [2024-06-11 12:26:05.704236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.854 [2024-06-11 12:26:05.718749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.854 [2024-06-11 12:26:05.718767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.854 [2024-06-11 12:26:05.718774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.854 [2024-06-11 12:26:05.733518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.854 [2024-06-11 12:26:05.733539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.854 [2024-06-11 12:26:05.733545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.854 [2024-06-11 12:26:05.748918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.854 [2024-06-11 12:26:05.748936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.854 [2024-06-11 12:26:05.748942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.854 [2024-06-11 12:26:05.763285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.854 [2024-06-11 12:26:05.763303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.854 [2024-06-11 12:26:05.763309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.854 [2024-06-11 12:26:05.778513] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.854 [2024-06-11 12:26:05.778530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.854 [2024-06-11 12:26:05.778537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.854 [2024-06-11 12:26:05.792946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.854 [2024-06-11 12:26:05.792964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.854 [2024-06-11 12:26:05.792970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.854 [2024-06-11 12:26:05.807406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.854 [2024-06-11 12:26:05.807423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.854 [2024-06-11 12:26:05.807430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.854 [2024-06-11 12:26:05.822379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.854 [2024-06-11 12:26:05.822396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.854 [2024-06-11 12:26:05.822403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.854 [2024-06-11 12:26:05.836502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.854 [2024-06-11 12:26:05.836519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.854 [2024-06-11 12:26:05.836525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.854 [2024-06-11 12:26:05.850923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.854 [2024-06-11 12:26:05.850940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.854 [2024-06-11 12:26:05.850946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.854 [2024-06-11 12:26:05.866024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.854 [2024-06-11 12:26:05.866041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.854 [2024-06-11 12:26:05.866047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:31:52.854 [2024-06-11 12:26:05.880192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:52.854 [2024-06-11 12:26:05.880210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:52.854 [2024-06-11 12:26:05.880217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.115 [2024-06-11 12:26:05.895342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.115 [2024-06-11 12:26:05.895360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.115 [2024-06-11 12:26:05.895367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.115 [2024-06-11 12:26:05.909777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.115 [2024-06-11 12:26:05.909794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.115 [2024-06-11 12:26:05.909801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.115 [2024-06-11 12:26:05.924486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.115 [2024-06-11 12:26:05.924504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.115 [2024-06-11 12:26:05.924510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.115 [2024-06-11 12:26:05.938788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.115 [2024-06-11 12:26:05.938805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.115 [2024-06-11 12:26:05.938812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.115 [2024-06-11 12:26:05.953712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.115 [2024-06-11 12:26:05.953730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.115 [2024-06-11 12:26:05.953736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.115 [2024-06-11 12:26:05.968203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.115 [2024-06-11 12:26:05.968221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.115 [2024-06-11 12:26:05.968227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.115 [2024-06-11 12:26:05.982949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.115 [2024-06-11 12:26:05.982967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.115 [2024-06-11 12:26:05.982977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.115 [2024-06-11 12:26:05.997164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.115 [2024-06-11 12:26:05.997182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.115 [2024-06-11 12:26:05.997189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.115 [2024-06-11 12:26:06.011572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.115 [2024-06-11 12:26:06.011590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.115 [2024-06-11 12:26:06.011597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.115 [2024-06-11 12:26:06.026045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.115 [2024-06-11 12:26:06.026062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.115 [2024-06-11 12:26:06.026069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.115 [2024-06-11 12:26:06.041305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.115 [2024-06-11 12:26:06.041322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.115 [2024-06-11 12:26:06.041328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.115 [2024-06-11 12:26:06.055581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.115 [2024-06-11 12:26:06.055599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.115 [2024-06-11 12:26:06.055605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.115 [2024-06-11 12:26:06.070157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.115 [2024-06-11 12:26:06.070175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.115 [2024-06-11 12:26:06.070181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.116 [2024-06-11 12:26:06.084419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.116 [2024-06-11 12:26:06.084436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.116 [2024-06-11 12:26:06.084443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.116 [2024-06-11 12:26:06.099443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.116 [2024-06-11 12:26:06.099461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.116 [2024-06-11 12:26:06.099467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.116 [2024-06-11 12:26:06.114373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.116 [2024-06-11 12:26:06.114391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.116 [2024-06-11 12:26:06.114397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.116 [2024-06-11 12:26:06.128289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.116 [2024-06-11 12:26:06.128306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.116 [2024-06-11 12:26:06.128313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.116 [2024-06-11 12:26:06.143228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.116 [2024-06-11 12:26:06.143245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.116 [2024-06-11 12:26:06.143252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.376 [2024-06-11 12:26:06.158002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.376 [2024-06-11 12:26:06.158023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.376 [2024-06-11 12:26:06.158031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.376 [2024-06-11 12:26:06.166834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.376 [2024-06-11 12:26:06.166851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:53.376 [2024-06-11 12:26:06.166858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.376 [2024-06-11 12:26:06.187142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.376 [2024-06-11 12:26:06.187161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.376 [2024-06-11 12:26:06.187167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.376 [2024-06-11 12:26:06.200403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.377 [2024-06-11 12:26:06.200421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.377 [2024-06-11 12:26:06.200427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.377 [2024-06-11 12:26:06.209831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.377 [2024-06-11 12:26:06.209848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.377 [2024-06-11 12:26:06.209855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.377 [2024-06-11 12:26:06.221934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.377 [2024-06-11 12:26:06.221951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.377 [2024-06-11 12:26:06.221961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.377 [2024-06-11 12:26:06.236265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.377 [2024-06-11 12:26:06.236283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.377 [2024-06-11 12:26:06.236289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.377 [2024-06-11 12:26:06.249283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.377 [2024-06-11 12:26:06.249301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.377 [2024-06-11 12:26:06.249307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.377 [2024-06-11 12:26:06.263683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.377 [2024-06-11 12:26:06.263700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 
lba:8808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.377 [2024-06-11 12:26:06.263707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.377 [2024-06-11 12:26:06.278493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.377 [2024-06-11 12:26:06.278511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.377 [2024-06-11 12:26:06.278517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.377 [2024-06-11 12:26:06.293619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.377 [2024-06-11 12:26:06.293637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.377 [2024-06-11 12:26:06.293643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.377 [2024-06-11 12:26:06.308261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.377 [2024-06-11 12:26:06.308278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.377 [2024-06-11 12:26:06.308285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.377 [2024-06-11 12:26:06.322194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.377 [2024-06-11 12:26:06.322212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.377 [2024-06-11 12:26:06.322218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.377 [2024-06-11 12:26:06.336468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.377 [2024-06-11 12:26:06.336485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.377 [2024-06-11 12:26:06.336492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.377 [2024-06-11 12:26:06.351185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.377 [2024-06-11 12:26:06.351206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.377 [2024-06-11 12:26:06.351212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.377 [2024-06-11 12:26:06.365746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.377 [2024-06-11 12:26:06.365764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.377 [2024-06-11 12:26:06.365770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.377 [2024-06-11 12:26:06.380979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.377 [2024-06-11 12:26:06.380997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.377 [2024-06-11 12:26:06.381003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.377 [2024-06-11 12:26:06.395097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.377 [2024-06-11 12:26:06.395114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.377 [2024-06-11 12:26:06.395120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.377 [2024-06-11 12:26:06.409256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.377 [2024-06-11 12:26:06.409274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.377 [2024-06-11 12:26:06.409280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.638 [2024-06-11 12:26:06.423542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.638 [2024-06-11 12:26:06.423560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.638 [2024-06-11 12:26:06.423566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.638 [2024-06-11 12:26:06.438154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.638 [2024-06-11 12:26:06.438171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.638 [2024-06-11 12:26:06.438178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.638 [2024-06-11 12:26:06.451561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.638 [2024-06-11 12:26:06.451578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.638 [2024-06-11 12:26:06.451585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.638 [2024-06-11 12:26:06.465684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 
00:31:53.638 [2024-06-11 12:26:06.465702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.638 [2024-06-11 12:26:06.465708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.638 [2024-06-11 12:26:06.480323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.638 [2024-06-11 12:26:06.480341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.638 [2024-06-11 12:26:06.480347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.638 [2024-06-11 12:26:06.494707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.638 [2024-06-11 12:26:06.494724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.638 [2024-06-11 12:26:06.494731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.638 [2024-06-11 12:26:06.509512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.638 [2024-06-11 12:26:06.509530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.638 [2024-06-11 12:26:06.509536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.638 [2024-06-11 12:26:06.524047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.638 [2024-06-11 12:26:06.524065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.638 [2024-06-11 12:26:06.524071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.638 [2024-06-11 12:26:06.538999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.638 [2024-06-11 12:26:06.539019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.638 [2024-06-11 12:26:06.539026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.639 [2024-06-11 12:26:06.553438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.639 [2024-06-11 12:26:06.553456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.639 [2024-06-11 12:26:06.553462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.639 [2024-06-11 12:26:06.568327] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.639 [2024-06-11 12:26:06.568345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.639 [2024-06-11 12:26:06.568351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.639 [2024-06-11 12:26:06.582782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.639 [2024-06-11 12:26:06.582799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.639 [2024-06-11 12:26:06.582806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.639 [2024-06-11 12:26:06.597370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.639 [2024-06-11 12:26:06.597388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.639 [2024-06-11 12:26:06.597398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.639 [2024-06-11 12:26:06.612365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.639 [2024-06-11 12:26:06.612383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.639 [2024-06-11 12:26:06.612389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.639 [2024-06-11 12:26:06.626587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.639 [2024-06-11 12:26:06.626605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.639 [2024-06-11 12:26:06.626611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.639 [2024-06-11 12:26:06.641538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.639 [2024-06-11 12:26:06.641555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.639 [2024-06-11 12:26:06.641561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.639 [2024-06-11 12:26:06.655743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.639 [2024-06-11 12:26:06.655761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.639 [2024-06-11 12:26:06.655768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:31:53.639 [2024-06-11 12:26:06.670411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.639 [2024-06-11 12:26:06.670428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.639 [2024-06-11 12:26:06.670434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.899 [2024-06-11 12:26:06.684757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.899 [2024-06-11 12:26:06.684775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.899 [2024-06-11 12:26:06.684781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.900 [2024-06-11 12:26:06.699780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.900 [2024-06-11 12:26:06.699798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.900 [2024-06-11 12:26:06.699805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.900 [2024-06-11 12:26:06.713735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.900 [2024-06-11 12:26:06.713753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:91 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.900 [2024-06-11 12:26:06.713759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.900 [2024-06-11 12:26:06.728413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.900 [2024-06-11 12:26:06.728431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.900 [2024-06-11 12:26:06.728437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.900 [2024-06-11 12:26:06.742796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.900 [2024-06-11 12:26:06.742814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.900 [2024-06-11 12:26:06.742821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.900 [2024-06-11 12:26:06.757649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.900 [2024-06-11 12:26:06.757667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.900 [2024-06-11 12:26:06.757673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.900 [2024-06-11 12:26:06.771914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.900 [2024-06-11 12:26:06.771932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.900 [2024-06-11 12:26:06.771938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.900 [2024-06-11 12:26:06.786930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.900 [2024-06-11 12:26:06.786947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.900 [2024-06-11 12:26:06.786954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.900 [2024-06-11 12:26:06.801325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.900 [2024-06-11 12:26:06.801343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.900 [2024-06-11 12:26:06.801349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.900 [2024-06-11 12:26:06.816346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.900 [2024-06-11 12:26:06.816364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.900 [2024-06-11 12:26:06.816371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.900 [2024-06-11 12:26:06.830485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.900 [2024-06-11 12:26:06.830503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.900 [2024-06-11 12:26:06.830509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.900 [2024-06-11 12:26:06.845052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.900 [2024-06-11 12:26:06.845069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.900 [2024-06-11 12:26:06.845082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.900 [2024-06-11 12:26:06.859330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.900 [2024-06-11 12:26:06.859348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.900 [2024-06-11 12:26:06.859354] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.900 [2024-06-11 12:26:06.874092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.900 [2024-06-11 12:26:06.874110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.900 [2024-06-11 12:26:06.874116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.900 [2024-06-11 12:26:06.889006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.900 [2024-06-11 12:26:06.889028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.900 [2024-06-11 12:26:06.889034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.900 [2024-06-11 12:26:06.903305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.900 [2024-06-11 12:26:06.903323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.900 [2024-06-11 12:26:06.903329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.900 [2024-06-11 12:26:06.917781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.900 [2024-06-11 12:26:06.917799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.900 [2024-06-11 12:26:06.917806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.900 [2024-06-11 12:26:06.932465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:53.900 [2024-06-11 12:26:06.932483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.900 [2024-06-11 12:26:06.932489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:54.161 [2024-06-11 12:26:06.946758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:54.161 [2024-06-11 12:26:06.946776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.161 [2024-06-11 12:26:06.946783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:54.161 [2024-06-11 12:26:06.959734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:54.161 [2024-06-11 12:26:06.959752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:54.161 [2024-06-11 12:26:06.959758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:54.161 [2024-06-11 12:26:06.972593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:54.161 [2024-06-11 12:26:06.972614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.161 [2024-06-11 12:26:06.972620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:54.161 [2024-06-11 12:26:06.983551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:54.161 [2024-06-11 12:26:06.983569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.161 [2024-06-11 12:26:06.983575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:54.161 [2024-06-11 12:26:06.994345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:54.161 [2024-06-11 12:26:06.994363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.161 [2024-06-11 12:26:06.994369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:54.161 [2024-06-11 12:26:07.007852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:54.161 [2024-06-11 12:26:07.007870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.161 [2024-06-11 12:26:07.007876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:54.161 [2024-06-11 12:26:07.021331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:54.161 [2024-06-11 12:26:07.021348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.161 [2024-06-11 12:26:07.021355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:54.161 [2024-06-11 12:26:07.033663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:54.161 [2024-06-11 12:26:07.033680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.161 [2024-06-11 12:26:07.033686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:54.161 [2024-06-11 12:26:07.048326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:54.161 [2024-06-11 12:26:07.048343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 
lba:11005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.161 [2024-06-11 12:26:07.048350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:54.161 [2024-06-11 12:26:07.062290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:54.161 [2024-06-11 12:26:07.062308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.161 [2024-06-11 12:26:07.062315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:54.161 [2024-06-11 12:26:07.076466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e2470) 00:31:54.161 [2024-06-11 12:26:07.076485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.161 [2024-06-11 12:26:07.076491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:54.161 00:31:54.161 Latency(us) 00:31:54.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:54.161 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:54.161 nvme0n1 : 2.01 17879.38 69.84 0.00 0.00 7152.60 1952.43 21736.11 00:31:54.161 =================================================================================================================== 00:31:54.161 Total : 17879.38 69.84 0.00 0.00 7152.60 1952.43 21736.11 00:31:54.161 0 00:31:54.161 12:26:07 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:54.161 12:26:07 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:54.161 12:26:07 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:54.161 | .driver_specific 00:31:54.161 | .nvme_error 00:31:54.161 | .status_code 00:31:54.161 | .command_transient_transport_error' 00:31:54.161 12:26:07 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:54.422 12:26:07 -- host/digest.sh@71 -- # (( 140 > 0 )) 00:31:54.422 12:26:07 -- host/digest.sh@73 -- # killprocess 1690101 00:31:54.422 12:26:07 -- common/autotest_common.sh@926 -- # '[' -z 1690101 ']' 00:31:54.422 12:26:07 -- common/autotest_common.sh@930 -- # kill -0 1690101 00:31:54.422 12:26:07 -- common/autotest_common.sh@931 -- # uname 00:31:54.422 12:26:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:54.422 12:26:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1690101 00:31:54.422 12:26:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:54.422 12:26:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:54.422 12:26:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1690101' 00:31:54.422 killing process with pid 1690101 00:31:54.422 12:26:07 -- common/autotest_common.sh@945 -- # kill 1690101 00:31:54.422 Received shutdown signal, test time was about 2.000000 seconds 00:31:54.422 00:31:54.422 Latency(us) 00:31:54.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:54.422 =================================================================================================================== 00:31:54.422 
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:54.422 12:26:07 -- common/autotest_common.sh@950 -- # wait 1690101 00:31:54.422 12:26:07 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:31:54.422 12:26:07 -- host/digest.sh@54 -- # local rw bs qd 00:31:54.422 12:26:07 -- host/digest.sh@56 -- # rw=randread 00:31:54.422 12:26:07 -- host/digest.sh@56 -- # bs=131072 00:31:54.422 12:26:07 -- host/digest.sh@56 -- # qd=16 00:31:54.422 12:26:07 -- host/digest.sh@58 -- # bperfpid=1690814 00:31:54.422 12:26:07 -- host/digest.sh@60 -- # waitforlisten 1690814 /var/tmp/bperf.sock 00:31:54.422 12:26:07 -- common/autotest_common.sh@819 -- # '[' -z 1690814 ']' 00:31:54.422 12:26:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:54.422 12:26:07 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:31:54.422 12:26:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:54.422 12:26:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:54.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:54.422 12:26:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:54.422 12:26:07 -- common/autotest_common.sh@10 -- # set +x 00:31:54.422 [2024-06-11 12:26:07.444925] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:54.422 [2024-06-11 12:26:07.444978] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1690814 ] 00:31:54.422 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:54.422 Zero copy mechanism will not be used. 
00:31:54.683 EAL: No free 2048 kB hugepages reported on node 1 00:31:54.683 [2024-06-11 12:26:07.520166] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.683 [2024-06-11 12:26:07.546680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.252 12:26:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:55.252 12:26:08 -- common/autotest_common.sh@852 -- # return 0 00:31:55.252 12:26:08 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:55.252 12:26:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:55.512 12:26:08 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:55.512 12:26:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:55.512 12:26:08 -- common/autotest_common.sh@10 -- # set +x 00:31:55.512 12:26:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:55.512 12:26:08 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:55.512 12:26:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:55.773 nvme0n1 00:31:55.773 12:26:08 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:31:55.773 12:26:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:55.773 12:26:08 -- common/autotest_common.sh@10 -- # set +x 00:31:55.773 12:26:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:55.773 12:26:08 -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:55.773 12:26:08 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:55.773 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:55.773 Zero copy mechanism will not be used. 00:31:55.773 Running I/O for 2 seconds... 
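The trace above shows the shape of this digest check before the I/O output begins: bdevperf is started against /var/tmp/bperf.sock, error statistics are enabled with retries disabled, the controller is attached with data digest (--ddgst) over TCP, crc32c corruption is injected through accel_error_inject_error, I/O runs for two seconds, and the transient-transport-error count is read back from bdev_get_iostat. A minimal stand-alone sketch of that flow, using only the paths, addresses, and RPCs visible in this trace (a sleep stands in for the test's waitforlisten helper), might look like:

#!/usr/bin/env bash
# Sketch of the data-digest error-injection flow seen in this run; paths and
# the 10.0.0.2:4420 target come from the trace above, adjust for your setup.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

# Start bdevperf on its own RPC socket (randread, 128 KiB, qd 16, 2 s, wait for start).
$SPDK/build/examples/bdevperf -m 2 -r $SOCK -w randread -o 131072 -t 2 -q 16 -z &
sleep 1   # the test uses waitforlisten on $SOCK instead

# Count completions in the error statistics and never retry, so every injected
# digest failure stays visible in the iostat counters.
$SPDK/scripts/rpc.py -s $SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the target with data digest enabled.
$SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt crc32c results; this mirrors the rpc_cmd call in the trace, which goes
# to the default RPC socket rather than to bperf.sock.
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

# Drive I/O, then read back how many commands ended in a transient transport error.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests
$SPDK/scripts/rpc.py -s $SOCK bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The digest-error notices that follow are the expected result of that injection; the final jq query is what the test compares against zero to decide pass or fail.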
00:31:55.773 [2024-06-11 12:26:08.700676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:55.773 [2024-06-11 12:26:08.700708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.773 [2024-06-11 12:26:08.700716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:55.773 [2024-06-11 12:26:08.712354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:55.773 [2024-06-11 12:26:08.712376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.773 [2024-06-11 12:26:08.712383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:55.773 [2024-06-11 12:26:08.724774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:55.773 [2024-06-11 12:26:08.724794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.773 [2024-06-11 12:26:08.724800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:55.773 [2024-06-11 12:26:08.736214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:55.773 [2024-06-11 12:26:08.736234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.773 [2024-06-11 12:26:08.736240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.773 [2024-06-11 12:26:08.747228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:55.773 [2024-06-11 12:26:08.747246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.773 [2024-06-11 12:26:08.747257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:55.773 [2024-06-11 12:26:08.756491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:55.773 [2024-06-11 12:26:08.756510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.773 [2024-06-11 12:26:08.756517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:55.773 [2024-06-11 12:26:08.764701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:55.773 [2024-06-11 12:26:08.764719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.773 [2024-06-11 12:26:08.764726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:55.773 [2024-06-11 12:26:08.775120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:55.773 [2024-06-11 12:26:08.775138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.773 [2024-06-11 12:26:08.775144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.773 [2024-06-11 12:26:08.786532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:55.773 [2024-06-11 12:26:08.786551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.773 [2024-06-11 12:26:08.786557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:55.773 [2024-06-11 12:26:08.795965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:55.773 [2024-06-11 12:26:08.795984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.773 [2024-06-11 12:26:08.795990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:55.773 [2024-06-11 12:26:08.803910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:55.773 [2024-06-11 12:26:08.803928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.774 [2024-06-11 12:26:08.803935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.035 [2024-06-11 12:26:08.813325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.035 [2024-06-11 12:26:08.813344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.035 [2024-06-11 12:26:08.813350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.035 [2024-06-11 12:26:08.824711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.035 [2024-06-11 12:26:08.824730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.035 [2024-06-11 12:26:08.824737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.035 [2024-06-11 12:26:08.836266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.035 [2024-06-11 12:26:08.836287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.035 [2024-06-11 12:26:08.836294] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.035 [2024-06-11 12:26:08.847362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.035 [2024-06-11 12:26:08.847381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.035 [2024-06-11 12:26:08.847387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.035 [2024-06-11 12:26:08.859120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.035 [2024-06-11 12:26:08.859138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.035 [2024-06-11 12:26:08.859145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.035 [2024-06-11 12:26:08.872100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.035 [2024-06-11 12:26:08.872118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.035 [2024-06-11 12:26:08.872125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.035 [2024-06-11 12:26:08.885024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.035 [2024-06-11 12:26:08.885042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.035 [2024-06-11 12:26:08.885049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.035 [2024-06-11 12:26:08.898240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.035 [2024-06-11 12:26:08.898258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.035 [2024-06-11 12:26:08.898264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.035 [2024-06-11 12:26:08.910330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.035 [2024-06-11 12:26:08.910348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.035 [2024-06-11 12:26:08.910355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.035 [2024-06-11 12:26:08.923209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.035 [2024-06-11 12:26:08.923228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.035 
[2024-06-11 12:26:08.923234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.035 [2024-06-11 12:26:08.935978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.035 [2024-06-11 12:26:08.935997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.035 [2024-06-11 12:26:08.936003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.035 [2024-06-11 12:26:08.948861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.035 [2024-06-11 12:26:08.948879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.035 [2024-06-11 12:26:08.948885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.035 [2024-06-11 12:26:08.961755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.035 [2024-06-11 12:26:08.961774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.035 [2024-06-11 12:26:08.961780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.035 [2024-06-11 12:26:08.973821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.035 [2024-06-11 12:26:08.973839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.036 [2024-06-11 12:26:08.973846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.036 [2024-06-11 12:26:08.986408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.036 [2024-06-11 12:26:08.986426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.036 [2024-06-11 12:26:08.986433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.036 [2024-06-11 12:26:08.998386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.036 [2024-06-11 12:26:08.998405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.036 [2024-06-11 12:26:08.998412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.036 [2024-06-11 12:26:09.010583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.036 [2024-06-11 12:26:09.010600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.036 [2024-06-11 12:26:09.010607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.036 [2024-06-11 12:26:09.023790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.036 [2024-06-11 12:26:09.023808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.036 [2024-06-11 12:26:09.023814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.036 [2024-06-11 12:26:09.034667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.036 [2024-06-11 12:26:09.034685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.036 [2024-06-11 12:26:09.034691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.036 [2024-06-11 12:26:09.042271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.036 [2024-06-11 12:26:09.042289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.036 [2024-06-11 12:26:09.042300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.036 [2024-06-11 12:26:09.053064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.036 [2024-06-11 12:26:09.053083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.036 [2024-06-11 12:26:09.053089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.036 [2024-06-11 12:26:09.061569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.036 [2024-06-11 12:26:09.061586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.036 [2024-06-11 12:26:09.061593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.297 [2024-06-11 12:26:09.070073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.297 [2024-06-11 12:26:09.070091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.297 [2024-06-11 12:26:09.070098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.297 [2024-06-11 12:26:09.079132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.298 [2024-06-11 12:26:09.079149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.298 [2024-06-11 12:26:09.079156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.298 [2024-06-11 12:26:09.086082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.298 [2024-06-11 12:26:09.086101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.298 [2024-06-11 12:26:09.086107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.298 [2024-06-11 12:26:09.095400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.298 [2024-06-11 12:26:09.095419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.298 [2024-06-11 12:26:09.095426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.298 [2024-06-11 12:26:09.103856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.298 [2024-06-11 12:26:09.103874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.298 [2024-06-11 12:26:09.103881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.298 [2024-06-11 12:26:09.110138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.298 [2024-06-11 12:26:09.110157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.298 [2024-06-11 12:26:09.110164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.298 [2024-06-11 12:26:09.115983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.298 [2024-06-11 12:26:09.116002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.298 [2024-06-11 12:26:09.116008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.298 [2024-06-11 12:26:09.123767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.298 [2024-06-11 12:26:09.123786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.298 [2024-06-11 12:26:09.123792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.298 [2024-06-11 12:26:09.132367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.298 [2024-06-11 12:26:09.132385] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.298 [2024-06-11 12:26:09.132392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.298 [2024-06-11 12:26:09.139942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.298 [2024-06-11 12:26:09.139959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.298 [2024-06-11 12:26:09.139966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.298 [2024-06-11 12:26:09.147380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.298 [2024-06-11 12:26:09.147398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.298 [2024-06-11 12:26:09.147404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.298 [2024-06-11 12:26:09.154306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.298 [2024-06-11 12:26:09.154324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.298 [2024-06-11 12:26:09.154330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.298 [2024-06-11 12:26:09.157655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.298 [2024-06-11 12:26:09.157673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.298 [2024-06-11 12:26:09.157679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.298 [2024-06-11 12:26:09.165743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.298 [2024-06-11 12:26:09.165761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.298 [2024-06-11 12:26:09.165767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.298 [2024-06-11 12:26:09.173410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.298 [2024-06-11 12:26:09.173427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.298 [2024-06-11 12:26:09.173437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.298 [2024-06-11 12:26:09.183299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.298 
[2024-06-11 12:26:09.183317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.298 [2024-06-11 12:26:09.183323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.298 [2024-06-11 12:26:09.192049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.298 [2024-06-11 12:26:09.192068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.298 [2024-06-11 12:26:09.192074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.298 [2024-06-11 12:26:09.202229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.298 [2024-06-11 12:26:09.202247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.298 [2024-06-11 12:26:09.202253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.298 [2024-06-11 12:26:09.212102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.298 [2024-06-11 12:26:09.212120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.298 [2024-06-11 12:26:09.212126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.298 [2024-06-11 12:26:09.222377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.298 [2024-06-11 12:26:09.222396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.298 [2024-06-11 12:26:09.222402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.298 [2024-06-11 12:26:09.230421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.298 [2024-06-11 12:26:09.230439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.298 [2024-06-11 12:26:09.230445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.298 [2024-06-11 12:26:09.234581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.298 [2024-06-11 12:26:09.234600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.298 [2024-06-11 12:26:09.234606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.298 [2024-06-11 12:26:09.240599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xdd2d00) 00:31:56.298 [2024-06-11 12:26:09.240618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.298 [2024-06-11 12:26:09.240624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.298 [2024-06-11 12:26:09.249161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.298 [2024-06-11 12:26:09.249183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.298 [2024-06-11 12:26:09.249189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.298 [2024-06-11 12:26:09.258531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.298 [2024-06-11 12:26:09.258548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.298 [2024-06-11 12:26:09.258555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.298 [2024-06-11 12:26:09.267891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.298 [2024-06-11 12:26:09.267909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.298 [2024-06-11 12:26:09.267915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.299 [2024-06-11 12:26:09.275013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.299 [2024-06-11 12:26:09.275035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.299 [2024-06-11 12:26:09.275042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.299 [2024-06-11 12:26:09.283635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.299 [2024-06-11 12:26:09.283653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.299 [2024-06-11 12:26:09.283659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.299 [2024-06-11 12:26:09.291359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.299 [2024-06-11 12:26:09.291377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.299 [2024-06-11 12:26:09.291384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.299 [2024-06-11 12:26:09.300935] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.299 [2024-06-11 12:26:09.300954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.299 [2024-06-11 12:26:09.300961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.299 [2024-06-11 12:26:09.309992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.299 [2024-06-11 12:26:09.310010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.299 [2024-06-11 12:26:09.310021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.299 [2024-06-11 12:26:09.319162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.299 [2024-06-11 12:26:09.319180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.299 [2024-06-11 12:26:09.319186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.299 [2024-06-11 12:26:09.327200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.299 [2024-06-11 12:26:09.327218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.299 [2024-06-11 12:26:09.327224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.560 [2024-06-11 12:26:09.332184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.560 [2024-06-11 12:26:09.332203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.560 [2024-06-11 12:26:09.332209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.560 [2024-06-11 12:26:09.335323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.560 [2024-06-11 12:26:09.335341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.560 [2024-06-11 12:26:09.335347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.560 [2024-06-11 12:26:09.341600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.560 [2024-06-11 12:26:09.341619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.560 [2024-06-11 12:26:09.341625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:31:56.560 [2024-06-11 12:26:09.345361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.560 [2024-06-11 12:26:09.345379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.560 [2024-06-11 12:26:09.345386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.560 [2024-06-11 12:26:09.351400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.560 [2024-06-11 12:26:09.351418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.560 [2024-06-11 12:26:09.351424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.560 [2024-06-11 12:26:09.354356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.560 [2024-06-11 12:26:09.354373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.560 [2024-06-11 12:26:09.354380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.560 [2024-06-11 12:26:09.357147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.560 [2024-06-11 12:26:09.357165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.560 [2024-06-11 12:26:09.357171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.560 [2024-06-11 12:26:09.359953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.560 [2024-06-11 12:26:09.359971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.560 [2024-06-11 12:26:09.359981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.560 [2024-06-11 12:26:09.362320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.560 [2024-06-11 12:26:09.362339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.560 [2024-06-11 12:26:09.362345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.560 [2024-06-11 12:26:09.364985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.560 [2024-06-11 12:26:09.365003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-06-11 12:26:09.365009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.561 [2024-06-11 12:26:09.370639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.561 [2024-06-11 12:26:09.370656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-06-11 12:26:09.370663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.561 [2024-06-11 12:26:09.377331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.561 [2024-06-11 12:26:09.377348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-06-11 12:26:09.377355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.561 [2024-06-11 12:26:09.386085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.561 [2024-06-11 12:26:09.386103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-06-11 12:26:09.386110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.561 [2024-06-11 12:26:09.394094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.561 [2024-06-11 12:26:09.394112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-06-11 12:26:09.394118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.561 [2024-06-11 12:26:09.403825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.561 [2024-06-11 12:26:09.403844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-06-11 12:26:09.403850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.561 [2024-06-11 12:26:09.412246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.561 [2024-06-11 12:26:09.412265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-06-11 12:26:09.412273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.561 [2024-06-11 12:26:09.422057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.561 [2024-06-11 12:26:09.422078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-06-11 12:26:09.422084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.561 [2024-06-11 12:26:09.429345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.561 [2024-06-11 12:26:09.429364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-06-11 12:26:09.429370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.561 [2024-06-11 12:26:09.438616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.561 [2024-06-11 12:26:09.438635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-06-11 12:26:09.438642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.561 [2024-06-11 12:26:09.448611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.561 [2024-06-11 12:26:09.448630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-06-11 12:26:09.448636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.561 [2024-06-11 12:26:09.459853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.561 [2024-06-11 12:26:09.459871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-06-11 12:26:09.459877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.561 [2024-06-11 12:26:09.470717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.561 [2024-06-11 12:26:09.470736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-06-11 12:26:09.470742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.561 [2024-06-11 12:26:09.483609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.561 [2024-06-11 12:26:09.483628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-06-11 12:26:09.483634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.561 [2024-06-11 12:26:09.495679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.561 [2024-06-11 12:26:09.495698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-06-11 12:26:09.495704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.561 [2024-06-11 12:26:09.507091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.561 [2024-06-11 12:26:09.507110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-06-11 12:26:09.507116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.561 [2024-06-11 12:26:09.517645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.561 [2024-06-11 12:26:09.517664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-06-11 12:26:09.517670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.561 [2024-06-11 12:26:09.526709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.561 [2024-06-11 12:26:09.526728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-06-11 12:26:09.526734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.561 [2024-06-11 12:26:09.535213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.561 [2024-06-11 12:26:09.535230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-06-11 12:26:09.535237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.561 [2024-06-11 12:26:09.540519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.561 [2024-06-11 12:26:09.540537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-06-11 12:26:09.540544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.561 [2024-06-11 12:26:09.549014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.561 [2024-06-11 12:26:09.549037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-06-11 12:26:09.549044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.561 [2024-06-11 12:26:09.557816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.561 [2024-06-11 12:26:09.557835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 
[2024-06-11 12:26:09.557841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.561 [2024-06-11 12:26:09.568153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.561 [2024-06-11 12:26:09.568172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-06-11 12:26:09.568178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.561 [2024-06-11 12:26:09.579854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.561 [2024-06-11 12:26:09.579873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-06-11 12:26:09.579879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.561 [2024-06-11 12:26:09.591229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.561 [2024-06-11 12:26:09.591248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.561 [2024-06-11 12:26:09.591257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.822 [2024-06-11 12:26:09.602185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.822 [2024-06-11 12:26:09.602204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.822 [2024-06-11 12:26:09.602210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.822 [2024-06-11 12:26:09.612829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.822 [2024-06-11 12:26:09.612847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.822 [2024-06-11 12:26:09.612853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.822 [2024-06-11 12:26:09.623115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.822 [2024-06-11 12:26:09.623133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.822 [2024-06-11 12:26:09.623140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.822 [2024-06-11 12:26:09.633726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.822 [2024-06-11 12:26:09.633744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4704 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:31:56.822 [2024-06-11 12:26:09.633750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.822 [2024-06-11 12:26:09.642885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.822 [2024-06-11 12:26:09.642903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.822 [2024-06-11 12:26:09.642909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.822 [2024-06-11 12:26:09.654121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.822 [2024-06-11 12:26:09.654139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.822 [2024-06-11 12:26:09.654145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.822 [2024-06-11 12:26:09.662686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.822 [2024-06-11 12:26:09.662704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.822 [2024-06-11 12:26:09.662710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.822 [2024-06-11 12:26:09.672289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.822 [2024-06-11 12:26:09.672307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.822 [2024-06-11 12:26:09.672313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.822 [2024-06-11 12:26:09.680803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.822 [2024-06-11 12:26:09.680825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.822 [2024-06-11 12:26:09.680831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.822 [2024-06-11 12:26:09.687188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.822 [2024-06-11 12:26:09.687207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.822 [2024-06-11 12:26:09.687213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.822 [2024-06-11 12:26:09.698084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.822 [2024-06-11 12:26:09.698103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.822 [2024-06-11 12:26:09.698109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.822 [2024-06-11 12:26:09.707539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.822 [2024-06-11 12:26:09.707558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.822 [2024-06-11 12:26:09.707564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.822 [2024-06-11 12:26:09.718358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.822 [2024-06-11 12:26:09.718377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.822 [2024-06-11 12:26:09.718383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.822 [2024-06-11 12:26:09.729467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.822 [2024-06-11 12:26:09.729486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.823 [2024-06-11 12:26:09.729493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.823 [2024-06-11 12:26:09.741477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.823 [2024-06-11 12:26:09.741496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.823 [2024-06-11 12:26:09.741502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.823 [2024-06-11 12:26:09.753317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.823 [2024-06-11 12:26:09.753336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.823 [2024-06-11 12:26:09.753342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.823 [2024-06-11 12:26:09.765578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.823 [2024-06-11 12:26:09.765597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.823 [2024-06-11 12:26:09.765603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.823 [2024-06-11 12:26:09.778423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.823 [2024-06-11 12:26:09.778442] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.823 [2024-06-11 12:26:09.778449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.823 [2024-06-11 12:26:09.790592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.823 [2024-06-11 12:26:09.790611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.823 [2024-06-11 12:26:09.790617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.823 [2024-06-11 12:26:09.802989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.823 [2024-06-11 12:26:09.803007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.823 [2024-06-11 12:26:09.803014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.823 [2024-06-11 12:26:09.813687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.823 [2024-06-11 12:26:09.813706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.823 [2024-06-11 12:26:09.813712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.823 [2024-06-11 12:26:09.820878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.823 [2024-06-11 12:26:09.820897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.823 [2024-06-11 12:26:09.820904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.823 [2024-06-11 12:26:09.832804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.823 [2024-06-11 12:26:09.832823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.823 [2024-06-11 12:26:09.832829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.823 [2024-06-11 12:26:09.844606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.823 [2024-06-11 12:26:09.844625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.823 [2024-06-11 12:26:09.844632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.823 [2024-06-11 12:26:09.855157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:56.823 
[2024-06-11 12:26:09.855176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.823 [2024-06-11 12:26:09.855183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.088 [2024-06-11 12:26:09.866057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.088 [2024-06-11 12:26:09.866077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.088 [2024-06-11 12:26:09.866087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.088 [2024-06-11 12:26:09.876104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.088 [2024-06-11 12:26:09.876123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.088 [2024-06-11 12:26:09.876129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.088 [2024-06-11 12:26:09.886827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.088 [2024-06-11 12:26:09.886845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.088 [2024-06-11 12:26:09.886852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.088 [2024-06-11 12:26:09.898406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.088 [2024-06-11 12:26:09.898425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.088 [2024-06-11 12:26:09.898431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.088 [2024-06-11 12:26:09.908337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.088 [2024-06-11 12:26:09.908356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.088 [2024-06-11 12:26:09.908362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.088 [2024-06-11 12:26:09.919025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.088 [2024-06-11 12:26:09.919044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.088 [2024-06-11 12:26:09.919050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.088 [2024-06-11 12:26:09.929064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xdd2d00) 00:31:57.088 [2024-06-11 12:26:09.929083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.088 [2024-06-11 12:26:09.929089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.088 [2024-06-11 12:26:09.936305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.088 [2024-06-11 12:26:09.936323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.088 [2024-06-11 12:26:09.936330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.088 [2024-06-11 12:26:09.943725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.088 [2024-06-11 12:26:09.943743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.088 [2024-06-11 12:26:09.943749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.088 [2024-06-11 12:26:09.952326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.088 [2024-06-11 12:26:09.952345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.088 [2024-06-11 12:26:09.952351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.088 [2024-06-11 12:26:09.963234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.088 [2024-06-11 12:26:09.963252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.088 [2024-06-11 12:26:09.963259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.088 [2024-06-11 12:26:09.972166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.088 [2024-06-11 12:26:09.972184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.088 [2024-06-11 12:26:09.972190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.088 [2024-06-11 12:26:09.981604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.088 [2024-06-11 12:26:09.981623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.088 [2024-06-11 12:26:09.981629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.088 [2024-06-11 12:26:09.990058] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.088 [2024-06-11 12:26:09.990076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.088 [2024-06-11 12:26:09.990082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.088 [2024-06-11 12:26:09.999650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.088 [2024-06-11 12:26:09.999669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.088 [2024-06-11 12:26:09.999675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.088 [2024-06-11 12:26:10.005141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.088 [2024-06-11 12:26:10.005160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.088 [2024-06-11 12:26:10.005166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.088 [2024-06-11 12:26:10.010176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.088 [2024-06-11 12:26:10.010195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.088 [2024-06-11 12:26:10.010201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.088 [2024-06-11 12:26:10.018413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.088 [2024-06-11 12:26:10.018431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.088 [2024-06-11 12:26:10.018441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.089 [2024-06-11 12:26:10.023810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.089 [2024-06-11 12:26:10.023830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.089 [2024-06-11 12:26:10.023837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.089 [2024-06-11 12:26:10.035174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.089 [2024-06-11 12:26:10.035193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.089 [2024-06-11 12:26:10.035199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:31:57.089 [2024-06-11 12:26:10.043960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.089 [2024-06-11 12:26:10.043980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.089 [2024-06-11 12:26:10.043989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.089 [2024-06-11 12:26:10.054702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.089 [2024-06-11 12:26:10.054722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.089 [2024-06-11 12:26:10.054731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.089 [2024-06-11 12:26:10.065398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.089 [2024-06-11 12:26:10.065417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.089 [2024-06-11 12:26:10.065423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.089 [2024-06-11 12:26:10.075992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.089 [2024-06-11 12:26:10.076010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.089 [2024-06-11 12:26:10.076022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.089 [2024-06-11 12:26:10.087268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.089 [2024-06-11 12:26:10.087288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.089 [2024-06-11 12:26:10.087295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.089 [2024-06-11 12:26:10.099290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.089 [2024-06-11 12:26:10.099308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.089 [2024-06-11 12:26:10.099315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.089 [2024-06-11 12:26:10.109891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.089 [2024-06-11 12:26:10.109913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.089 [2024-06-11 12:26:10.109919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.089 [2024-06-11 12:26:10.117500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.089 [2024-06-11 12:26:10.117518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.089 [2024-06-11 12:26:10.117525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.125420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.394 [2024-06-11 12:26:10.125438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.394 [2024-06-11 12:26:10.125445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.130080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.394 [2024-06-11 12:26:10.130098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.394 [2024-06-11 12:26:10.130104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.141307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.394 [2024-06-11 12:26:10.141326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.394 [2024-06-11 12:26:10.141333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.150893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.394 [2024-06-11 12:26:10.150913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.394 [2024-06-11 12:26:10.150920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.160266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.394 [2024-06-11 12:26:10.160285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.394 [2024-06-11 12:26:10.160292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.170459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.394 [2024-06-11 12:26:10.170478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.394 [2024-06-11 12:26:10.170485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.178804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.394 [2024-06-11 12:26:10.178823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.394 [2024-06-11 12:26:10.178829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.189822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.394 [2024-06-11 12:26:10.189841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.394 [2024-06-11 12:26:10.189847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.199925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.394 [2024-06-11 12:26:10.199942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.394 [2024-06-11 12:26:10.199949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.211164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.394 [2024-06-11 12:26:10.211183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.394 [2024-06-11 12:26:10.211191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.221093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.394 [2024-06-11 12:26:10.221111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.394 [2024-06-11 12:26:10.221117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.232268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.394 [2024-06-11 12:26:10.232287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.394 [2024-06-11 12:26:10.232293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.242823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.394 [2024-06-11 12:26:10.242841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.394 [2024-06-11 12:26:10.242848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.251098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.394 [2024-06-11 12:26:10.251117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.394 [2024-06-11 12:26:10.251124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.260213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.394 [2024-06-11 12:26:10.260231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.394 [2024-06-11 12:26:10.260238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.271077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.394 [2024-06-11 12:26:10.271096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.394 [2024-06-11 12:26:10.271106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.280237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.394 [2024-06-11 12:26:10.280256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.394 [2024-06-11 12:26:10.280262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.290719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.394 [2024-06-11 12:26:10.290738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.394 [2024-06-11 12:26:10.290745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.300877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.394 [2024-06-11 12:26:10.300896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.394 [2024-06-11 12:26:10.300902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.310992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.394 [2024-06-11 12:26:10.311011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.394 
[2024-06-11 12:26:10.311022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.320015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.394 [2024-06-11 12:26:10.320038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.394 [2024-06-11 12:26:10.320045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.329832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.394 [2024-06-11 12:26:10.329852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.394 [2024-06-11 12:26:10.329858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.339485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.394 [2024-06-11 12:26:10.339504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.394 [2024-06-11 12:26:10.339510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.349973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.394 [2024-06-11 12:26:10.349992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.394 [2024-06-11 12:26:10.349998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.361201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.394 [2024-06-11 12:26:10.361224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.394 [2024-06-11 12:26:10.361230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.370570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.394 [2024-06-11 12:26:10.370589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.394 [2024-06-11 12:26:10.370596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.394 [2024-06-11 12:26:10.378829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.395 [2024-06-11 12:26:10.378847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17952 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:31:57.395 [2024-06-11 12:26:10.378854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.395 [2024-06-11 12:26:10.389780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.395 [2024-06-11 12:26:10.389799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.395 [2024-06-11 12:26:10.389806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.395 [2024-06-11 12:26:10.400793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.395 [2024-06-11 12:26:10.400812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.395 [2024-06-11 12:26:10.400819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.395 [2024-06-11 12:26:10.412285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.395 [2024-06-11 12:26:10.412304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.395 [2024-06-11 12:26:10.412310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.395 [2024-06-11 12:26:10.421987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.395 [2024-06-11 12:26:10.422007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.395 [2024-06-11 12:26:10.422013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.656 [2024-06-11 12:26:10.429827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.656 [2024-06-11 12:26:10.429846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.656 [2024-06-11 12:26:10.429853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.656 [2024-06-11 12:26:10.440328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.656 [2024-06-11 12:26:10.440347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.657 [2024-06-11 12:26:10.440353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.657 [2024-06-11 12:26:10.446828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.657 [2024-06-11 12:26:10.446847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.657 [2024-06-11 12:26:10.446854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.657 [2024-06-11 12:26:10.453966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.657 [2024-06-11 12:26:10.453984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.657 [2024-06-11 12:26:10.453991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.657 [2024-06-11 12:26:10.463053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.657 [2024-06-11 12:26:10.463072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.657 [2024-06-11 12:26:10.463078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.657 [2024-06-11 12:26:10.474228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.657 [2024-06-11 12:26:10.474246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.657 [2024-06-11 12:26:10.474253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.657 [2024-06-11 12:26:10.482956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.657 [2024-06-11 12:26:10.482975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.657 [2024-06-11 12:26:10.482981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.657 [2024-06-11 12:26:10.490932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.657 [2024-06-11 12:26:10.490951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.657 [2024-06-11 12:26:10.490957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.657 [2024-06-11 12:26:10.501340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.657 [2024-06-11 12:26:10.501358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.657 [2024-06-11 12:26:10.501365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.657 [2024-06-11 12:26:10.512174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.657 [2024-06-11 12:26:10.512192] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.657 [2024-06-11 12:26:10.512199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.657 [2024-06-11 12:26:10.521888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.657 [2024-06-11 12:26:10.521907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.657 [2024-06-11 12:26:10.521917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.657 [2024-06-11 12:26:10.529053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.657 [2024-06-11 12:26:10.529071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.657 [2024-06-11 12:26:10.529078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.657 [2024-06-11 12:26:10.538514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.657 [2024-06-11 12:26:10.538533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.657 [2024-06-11 12:26:10.538540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.657 [2024-06-11 12:26:10.545153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.657 [2024-06-11 12:26:10.545172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.657 [2024-06-11 12:26:10.545178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.657 [2024-06-11 12:26:10.553640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.657 [2024-06-11 12:26:10.553658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.657 [2024-06-11 12:26:10.553664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.657 [2024-06-11 12:26:10.560893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.657 [2024-06-11 12:26:10.560912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.657 [2024-06-11 12:26:10.560918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.657 [2024-06-11 12:26:10.569558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.657 
[2024-06-11 12:26:10.569577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.657 [2024-06-11 12:26:10.569583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.657 [2024-06-11 12:26:10.576697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.657 [2024-06-11 12:26:10.576715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.657 [2024-06-11 12:26:10.576722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.657 [2024-06-11 12:26:10.585905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.657 [2024-06-11 12:26:10.585923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.657 [2024-06-11 12:26:10.585929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.657 [2024-06-11 12:26:10.595055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.657 [2024-06-11 12:26:10.595076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.657 [2024-06-11 12:26:10.595083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.657 [2024-06-11 12:26:10.600821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.657 [2024-06-11 12:26:10.600840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.657 [2024-06-11 12:26:10.600846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.657 [2024-06-11 12:26:10.609431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.657 [2024-06-11 12:26:10.609450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.657 [2024-06-11 12:26:10.609457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.657 [2024-06-11 12:26:10.619048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.657 [2024-06-11 12:26:10.619066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.657 [2024-06-11 12:26:10.619073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.657 [2024-06-11 12:26:10.626553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xdd2d00) 00:31:57.657 [2024-06-11 12:26:10.626572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.657 [2024-06-11 12:26:10.626578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.657 [2024-06-11 12:26:10.632196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.657 [2024-06-11 12:26:10.632215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.657 [2024-06-11 12:26:10.632221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.657 [2024-06-11 12:26:10.640111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.657 [2024-06-11 12:26:10.640130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.657 [2024-06-11 12:26:10.640136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.657 [2024-06-11 12:26:10.646369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.657 [2024-06-11 12:26:10.646388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.658 [2024-06-11 12:26:10.646394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.658 [2024-06-11 12:26:10.651285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.658 [2024-06-11 12:26:10.651303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.658 [2024-06-11 12:26:10.651309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.658 [2024-06-11 12:26:10.661043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.658 [2024-06-11 12:26:10.661061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.658 [2024-06-11 12:26:10.661067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.658 [2024-06-11 12:26:10.668746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.658 [2024-06-11 12:26:10.668765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.658 [2024-06-11 12:26:10.668771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.658 [2024-06-11 12:26:10.674283] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.658 [2024-06-11 12:26:10.674302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.658 [2024-06-11 12:26:10.674308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.658 [2024-06-11 12:26:10.682094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd2d00) 00:31:57.658 [2024-06-11 12:26:10.682113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.658 [2024-06-11 12:26:10.682119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.658 00:31:57.658 Latency(us) 00:31:57.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:57.658 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:57.658 nvme0n1 : 2.00 3358.11 419.76 0.00 0.00 4761.85 600.75 15073.28 00:31:57.658 =================================================================================================================== 00:31:57.658 Total : 3358.11 419.76 0.00 0.00 4761.85 600.75 15073.28 00:31:57.918 0 00:31:57.918 12:26:10 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:57.918 12:26:10 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:57.918 12:26:10 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:57.918 | .driver_specific 00:31:57.918 | .nvme_error 00:31:57.918 | .status_code 00:31:57.918 | .command_transient_transport_error' 00:31:57.918 12:26:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:57.918 12:26:10 -- host/digest.sh@71 -- # (( 216 > 0 )) 00:31:57.918 12:26:10 -- host/digest.sh@73 -- # killprocess 1690814 00:31:57.918 12:26:10 -- common/autotest_common.sh@926 -- # '[' -z 1690814 ']' 00:31:57.918 12:26:10 -- common/autotest_common.sh@930 -- # kill -0 1690814 00:31:57.918 12:26:10 -- common/autotest_common.sh@931 -- # uname 00:31:57.918 12:26:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:57.918 12:26:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1690814 00:31:57.918 12:26:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:57.918 12:26:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:57.918 12:26:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1690814' 00:31:57.918 killing process with pid 1690814 00:31:57.918 12:26:10 -- common/autotest_common.sh@945 -- # kill 1690814 00:31:57.918 Received shutdown signal, test time was about 2.000000 seconds 00:31:57.918 00:31:57.918 Latency(us) 00:31:57.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:57.918 =================================================================================================================== 00:31:57.918 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:57.918 12:26:10 -- common/autotest_common.sh@950 -- # wait 1690814 00:31:58.177 12:26:11 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:31:58.177 12:26:11 -- host/digest.sh@54 -- # local rw bs qd 00:31:58.177 12:26:11 
-- host/digest.sh@56 -- # rw=randwrite 00:31:58.177 12:26:11 -- host/digest.sh@56 -- # bs=4096 00:31:58.177 12:26:11 -- host/digest.sh@56 -- # qd=128 00:31:58.177 12:26:11 -- host/digest.sh@58 -- # bperfpid=1691500 00:31:58.177 12:26:11 -- host/digest.sh@60 -- # waitforlisten 1691500 /var/tmp/bperf.sock 00:31:58.178 12:26:11 -- common/autotest_common.sh@819 -- # '[' -z 1691500 ']' 00:31:58.178 12:26:11 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:31:58.178 12:26:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:58.178 12:26:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:58.178 12:26:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:58.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:58.178 12:26:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:58.178 12:26:11 -- common/autotest_common.sh@10 -- # set +x 00:31:58.178 [2024-06-11 12:26:11.068322] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:58.178 [2024-06-11 12:26:11.068379] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1691500 ] 00:31:58.178 EAL: No free 2048 kB hugepages reported on node 1 00:31:58.178 [2024-06-11 12:26:11.144886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.178 [2024-06-11 12:26:11.171459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.118 12:26:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:59.118 12:26:11 -- common/autotest_common.sh@852 -- # return 0 00:31:59.118 12:26:11 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:59.118 12:26:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:59.118 12:26:11 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:59.118 12:26:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:59.118 12:26:11 -- common/autotest_common.sh@10 -- # set +x 00:31:59.118 12:26:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:59.118 12:26:11 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:59.118 12:26:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:59.378 nvme0n1 00:31:59.378 12:26:12 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:59.378 12:26:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:59.378 12:26:12 -- common/autotest_common.sh@10 -- # set +x 00:31:59.378 12:26:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:59.378 12:26:12 -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:59.378 12:26:12 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:31:59.638 Running I/O for 2 seconds... 00:31:59.638 [2024-06-11 12:26:12.452147] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190ebfd0 00:31:59.638 [2024-06-11 12:26:12.452958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.638 [2024-06-11 12:26:12.452985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:59.638 [2024-06-11 12:26:12.463635] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e5ec8 00:31:59.638 [2024-06-11 12:26:12.464478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.638 [2024-06-11 12:26:12.464496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:59.638 [2024-06-11 12:26:12.475083] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190eaab8 00:31:59.638 [2024-06-11 12:26:12.475320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.638 [2024-06-11 12:26:12.475336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:59.638 [2024-06-11 12:26:12.486523] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190eee38 00:31:59.638 [2024-06-11 12:26:12.486761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.638 [2024-06-11 12:26:12.486778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:59.638 [2024-06-11 12:26:12.498034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e6b70 00:31:59.638 [2024-06-11 12:26:12.498345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.639 [2024-06-11 12:26:12.498360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:59.639 [2024-06-11 12:26:12.509411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190eaab8 00:31:59.639 [2024-06-11 12:26:12.509626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.639 [2024-06-11 12:26:12.509641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:59.639 [2024-06-11 12:26:12.520771] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e84c0 00:31:59.639 [2024-06-11 12:26:12.520977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.639 [2024-06-11 12:26:12.520993] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:59.639 [2024-06-11 12:26:12.532137] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e6300 00:31:59.639 [2024-06-11 12:26:12.532347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.639 [2024-06-11 12:26:12.532362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:59.639 [2024-06-11 12:26:12.543643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190eb760 00:31:59.639 [2024-06-11 12:26:12.543766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.639 [2024-06-11 12:26:12.543781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:59.639 [2024-06-11 12:26:12.554886] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190ed4e8 00:31:59.639 [2024-06-11 12:26:12.555011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.639 [2024-06-11 12:26:12.555035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:59.639 [2024-06-11 12:26:12.568597] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190ec408 00:31:59.639 [2024-06-11 12:26:12.570295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.639 [2024-06-11 12:26:12.570312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.639 [2024-06-11 12:26:12.577681] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f46d0 00:31:59.639 [2024-06-11 12:26:12.577843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.639 [2024-06-11 12:26:12.577858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:59.639 [2024-06-11 12:26:12.590597] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e95a0 00:31:59.639 [2024-06-11 12:26:12.591388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.639 [2024-06-11 12:26:12.591404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:59.639 [2024-06-11 12:26:12.601223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f0350 00:31:59.639 [2024-06-11 12:26:12.601788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.639 [2024-06-11 
12:26:12.601805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:59.639 [2024-06-11 12:26:12.612691] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e99d8 00:31:59.639 [2024-06-11 12:26:12.614169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.639 [2024-06-11 12:26:12.614185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.639 [2024-06-11 12:26:12.623159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fe2e8 00:31:59.639 [2024-06-11 12:26:12.623930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.639 [2024-06-11 12:26:12.623946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:59.639 [2024-06-11 12:26:12.634605] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f2948 00:31:59.639 [2024-06-11 12:26:12.635561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.639 [2024-06-11 12:26:12.635576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.639 [2024-06-11 12:26:12.646014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e12d8 00:31:59.639 [2024-06-11 12:26:12.646916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.639 [2024-06-11 12:26:12.646932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.639 [2024-06-11 12:26:12.657423] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f2510 00:31:59.639 [2024-06-11 12:26:12.658409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.639 [2024-06-11 12:26:12.658425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.639 [2024-06-11 12:26:12.668801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e49b0 00:31:59.639 [2024-06-11 12:26:12.669650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.639 [2024-06-11 12:26:12.669668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:59.899 [2024-06-11 12:26:12.681389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e8d30 00:31:59.899 [2024-06-11 12:26:12.682155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:59.899 [2024-06-11 12:26:12.682171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.899 [2024-06-11 12:26:12.692104] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f0bc0 00:31:59.899 [2024-06-11 12:26:12.692177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.899 [2024-06-11 12:26:12.692191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:59.899 [2024-06-11 12:26:12.705778] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f0350 00:31:59.899 [2024-06-11 12:26:12.707290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.899 [2024-06-11 12:26:12.707306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.899 [2024-06-11 12:26:12.717132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e88f8 00:31:59.899 [2024-06-11 12:26:12.718832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.899 [2024-06-11 12:26:12.718848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.899 [2024-06-11 12:26:12.728450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fd640 00:31:59.899 [2024-06-11 12:26:12.729986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.899 [2024-06-11 12:26:12.730001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.899 [2024-06-11 12:26:12.739767] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fb048 00:31:59.899 [2024-06-11 12:26:12.741429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.899 [2024-06-11 12:26:12.741446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:59.899 [2024-06-11 12:26:12.751110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e38d0 00:31:59.899 [2024-06-11 12:26:12.752595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.899 [2024-06-11 12:26:12.752610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:59.899 [2024-06-11 12:26:12.762482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e3498 00:31:59.899 [2024-06-11 12:26:12.764125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16301 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:31:59.899 [2024-06-11 12:26:12.764141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:59.899 [2024-06-11 12:26:12.773836] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f2d80 00:31:59.899 [2024-06-11 12:26:12.775327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.899 [2024-06-11 12:26:12.775344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:59.899 [2024-06-11 12:26:12.785178] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f1430 00:31:59.899 [2024-06-11 12:26:12.786641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.899 [2024-06-11 12:26:12.786657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:59.899 [2024-06-11 12:26:12.796555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190eee38 00:31:59.899 [2024-06-11 12:26:12.798173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.899 [2024-06-11 12:26:12.798189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:59.899 [2024-06-11 12:26:12.807882] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f6020 00:31:59.899 [2024-06-11 12:26:12.809340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.899 [2024-06-11 12:26:12.809356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:59.899 [2024-06-11 12:26:12.819220] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190ef6a8 00:31:59.899 [2024-06-11 12:26:12.820818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.899 [2024-06-11 12:26:12.820834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:59.899 [2024-06-11 12:26:12.830159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190eaab8 00:31:59.899 [2024-06-11 12:26:12.831057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.899 [2024-06-11 12:26:12.831073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:59.899 [2024-06-11 12:26:12.841825] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e5a90 00:31:59.899 [2024-06-11 12:26:12.843322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 
nsid:1 lba:1391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.899 [2024-06-11 12:26:12.843338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:59.899 [2024-06-11 12:26:12.853161] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fe720 00:31:59.899 [2024-06-11 12:26:12.854643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.899 [2024-06-11 12:26:12.854662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:59.899 [2024-06-11 12:26:12.864531] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e1f80 00:31:59.899 [2024-06-11 12:26:12.866040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.899 [2024-06-11 12:26:12.866056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:59.899 [2024-06-11 12:26:12.875906] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190eea00 00:31:59.900 [2024-06-11 12:26:12.877339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.900 [2024-06-11 12:26:12.877356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:59.900 [2024-06-11 12:26:12.887033] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190eee38 00:31:59.900 [2024-06-11 12:26:12.888138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.900 [2024-06-11 12:26:12.888154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:59.900 [2024-06-11 12:26:12.898713] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e8d30 00:31:59.900 [2024-06-11 12:26:12.899738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.900 [2024-06-11 12:26:12.899754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:59.900 [2024-06-11 12:26:12.910081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f31b8 00:31:59.900 [2024-06-11 12:26:12.911311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.900 [2024-06-11 12:26:12.911327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:59.900 [2024-06-11 12:26:12.921495] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f4f40 00:31:59.900 [2024-06-11 12:26:12.922878] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.900 [2024-06-11 12:26:12.922894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:59.900 [2024-06-11 12:26:12.932894] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e23b8 00:32:00.159 [2024-06-11 12:26:12.934138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.159 [2024-06-11 12:26:12.934153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:00.159 [2024-06-11 12:26:12.944278] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e3498 00:32:00.159 [2024-06-11 12:26:12.945579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.159 [2024-06-11 12:26:12.945595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:00.159 [2024-06-11 12:26:12.955696] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fb480 00:32:00.159 [2024-06-11 12:26:12.956560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.159 [2024-06-11 12:26:12.956576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:00.159 [2024-06-11 12:26:12.967253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e99d8 00:32:00.159 [2024-06-11 12:26:12.967841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.159 [2024-06-11 12:26:12.967857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:00.159 [2024-06-11 12:26:12.978647] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e4de8 00:32:00.159 [2024-06-11 12:26:12.979238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.159 [2024-06-11 12:26:12.979253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:00.159 [2024-06-11 12:26:12.990001] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fd640 00:32:00.159 [2024-06-11 12:26:12.990621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.159 [2024-06-11 12:26:12.990638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:00.159 [2024-06-11 12:26:13.001415] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f6458 00:32:00.159 [2024-06-11 12:26:13.002059] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.159 [2024-06-11 12:26:13.002076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:00.159 [2024-06-11 12:26:13.012786] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fe720 00:32:00.159 [2024-06-11 12:26:13.013376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.159 [2024-06-11 12:26:13.013393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:00.159 [2024-06-11 12:26:13.024154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f81e0 00:32:00.159 [2024-06-11 12:26:13.024602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.159 [2024-06-11 12:26:13.024618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:00.159 [2024-06-11 12:26:13.037046] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e99d8 00:32:00.159 [2024-06-11 12:26:13.038127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.159 [2024-06-11 12:26:13.038144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:00.159 [2024-06-11 12:26:13.046870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190eb760 00:32:00.159 [2024-06-11 12:26:13.047445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.159 [2024-06-11 12:26:13.047461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:00.159 [2024-06-11 12:26:13.058201] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fa7d8 00:32:00.159 [2024-06-11 12:26:13.058763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.159 [2024-06-11 12:26:13.058779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:00.159 [2024-06-11 12:26:13.069594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e9168 00:32:00.159 [2024-06-11 12:26:13.070152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.159 [2024-06-11 12:26:13.070169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:00.159 [2024-06-11 12:26:13.080926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f9f68 00:32:00.159 [2024-06-11 
12:26:13.081463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.159 [2024-06-11 12:26:13.081479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:00.159 [2024-06-11 12:26:13.092301] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190eaef0 00:32:00.159 [2024-06-11 12:26:13.092789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.159 [2024-06-11 12:26:13.092804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:00.159 [2024-06-11 12:26:13.103668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f1868 00:32:00.159 [2024-06-11 12:26:13.104128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.159 [2024-06-11 12:26:13.104144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:00.159 [2024-06-11 12:26:13.114981] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f7970 00:32:00.159 [2024-06-11 12:26:13.115460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.159 [2024-06-11 12:26:13.115477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:00.159 [2024-06-11 12:26:13.126323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e6fa8 00:32:00.159 [2024-06-11 12:26:13.126649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.159 [2024-06-11 12:26:13.126666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:00.159 [2024-06-11 12:26:13.137656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fc560 00:32:00.159 [2024-06-11 12:26:13.138106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.159 [2024-06-11 12:26:13.138122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:00.159 [2024-06-11 12:26:13.149049] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190ed4e8 00:32:00.159 [2024-06-11 12:26:13.149416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.160 [2024-06-11 12:26:13.149435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:00.160 [2024-06-11 12:26:13.160406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190eb760 
00:32:00.160 [2024-06-11 12:26:13.160741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.160 [2024-06-11 12:26:13.160757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:00.160 [2024-06-11 12:26:13.171796] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fb480 00:32:00.160 [2024-06-11 12:26:13.172137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.160 [2024-06-11 12:26:13.172153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:00.160 [2024-06-11 12:26:13.183148] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f9b30 00:32:00.160 [2024-06-11 12:26:13.183514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.160 [2024-06-11 12:26:13.183530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.419 [2024-06-11 12:26:13.194520] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190edd58 00:32:00.419 [2024-06-11 12:26:13.194829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.419 [2024-06-11 12:26:13.194846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.419 [2024-06-11 12:26:13.205999] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fb480 00:32:00.420 [2024-06-11 12:26:13.206353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.420 [2024-06-11 12:26:13.206369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:00.420 [2024-06-11 12:26:13.218923] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f6cc8 00:32:00.420 [2024-06-11 12:26:13.220229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.420 [2024-06-11 12:26:13.220244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:00.420 [2024-06-11 12:26:13.230108] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e3498 00:32:00.420 [2024-06-11 12:26:13.231080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.420 [2024-06-11 12:26:13.231103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:00.420 [2024-06-11 12:26:13.241741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) 
with pdu=0x2000190e27f0 00:32:00.420 [2024-06-11 12:26:13.243267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.420 [2024-06-11 12:26:13.243283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:00.420 [2024-06-11 12:26:13.252301] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f4f40 00:32:00.420 [2024-06-11 12:26:13.253106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.420 [2024-06-11 12:26:13.253122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:00.420 [2024-06-11 12:26:13.264186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f1868 00:32:00.420 [2024-06-11 12:26:13.265754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.420 [2024-06-11 12:26:13.265770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.420 [2024-06-11 12:26:13.275579] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190ebfd0 00:32:00.420 [2024-06-11 12:26:13.276971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.420 [2024-06-11 12:26:13.276987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.420 [2024-06-11 12:26:13.285011] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e4578 00:32:00.420 [2024-06-11 12:26:13.285155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.420 [2024-06-11 12:26:13.285170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:00.420 [2024-06-11 12:26:13.296402] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f8e88 00:32:00.420 [2024-06-11 12:26:13.296630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.420 [2024-06-11 12:26:13.296645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:00.420 [2024-06-11 12:26:13.307774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e0630 00:32:00.420 [2024-06-11 12:26:13.307994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.420 [2024-06-11 12:26:13.308009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:00.420 [2024-06-11 12:26:13.319271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x19654f0) with pdu=0x2000190e0630 00:32:00.420 [2024-06-11 12:26:13.319530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.420 [2024-06-11 12:26:13.319546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:00.420 [2024-06-11 12:26:13.330509] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f81e0 00:32:00.420 [2024-06-11 12:26:13.331345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.420 [2024-06-11 12:26:13.331361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:00.420 [2024-06-11 12:26:13.341858] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f3a28 00:32:00.420 [2024-06-11 12:26:13.342700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.420 [2024-06-11 12:26:13.342716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:00.420 [2024-06-11 12:26:13.355916] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f2948 00:32:00.420 [2024-06-11 12:26:13.356904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.420 [2024-06-11 12:26:13.356920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.420 [2024-06-11 12:26:13.367227] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f2d80 00:32:00.420 [2024-06-11 12:26:13.368665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.420 [2024-06-11 12:26:13.368681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.420 [2024-06-11 12:26:13.377801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e12d8 00:32:00.420 [2024-06-11 12:26:13.378610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.420 [2024-06-11 12:26:13.378625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:00.420 [2024-06-11 12:26:13.387813] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f7100 00:32:00.420 [2024-06-11 12:26:13.388731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.420 [2024-06-11 12:26:13.388747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.420 [2024-06-11 12:26:13.399211] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fe720 00:32:00.420 [2024-06-11 12:26:13.400239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.420 [2024-06-11 12:26:13.400255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.420 [2024-06-11 12:26:13.410394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e99d8 00:32:00.420 [2024-06-11 12:26:13.411300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.420 [2024-06-11 12:26:13.411316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:00.420 [2024-06-11 12:26:13.421715] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fc998 00:32:00.420 [2024-06-11 12:26:13.422460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.420 [2024-06-11 12:26:13.422476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:00.420 [2024-06-11 12:26:13.433060] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f4b08 00:32:00.420 [2024-06-11 12:26:13.433940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.420 [2024-06-11 12:26:13.433956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:00.420 [2024-06-11 12:26:13.444683] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190ebfd0 00:32:00.420 [2024-06-11 12:26:13.445236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.420 [2024-06-11 12:26:13.445255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:00.681 [2024-06-11 12:26:13.456074] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f6cc8 00:32:00.681 [2024-06-11 12:26:13.456854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.681 [2024-06-11 12:26:13.456871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:00.681 [2024-06-11 12:26:13.467479] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e73e0 00:32:00.681 [2024-06-11 12:26:13.468459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.681 [2024-06-11 12:26:13.468475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:00.681 [2024-06-11 12:26:13.478869] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f0ff8 00:32:00.681 [2024-06-11 12:26:13.479667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.681 [2024-06-11 12:26:13.479683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:00.681 [2024-06-11 12:26:13.490265] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e4de8 00:32:00.681 [2024-06-11 12:26:13.491149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.681 [2024-06-11 12:26:13.491165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:00.681 [2024-06-11 12:26:13.501674] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f4f40 00:32:00.681 [2024-06-11 12:26:13.502558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.681 [2024-06-11 12:26:13.502574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:00.681 [2024-06-11 12:26:13.513220] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f57b0 00:32:00.681 [2024-06-11 12:26:13.513883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.681 [2024-06-11 12:26:13.513899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:00.681 [2024-06-11 12:26:13.524570] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190ecc78 00:32:00.681 [2024-06-11 12:26:13.524686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.681 [2024-06-11 12:26:13.524701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:00.681 [2024-06-11 12:26:13.535895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fdeb0 00:32:00.681 [2024-06-11 12:26:13.536045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.681 [2024-06-11 12:26:13.536061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:00.681 [2024-06-11 12:26:13.549498] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190ebb98 00:32:00.681 [2024-06-11 12:26:13.551179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.681 [2024-06-11 12:26:13.551195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.681 
[2024-06-11 12:26:13.560882] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f57b0 00:32:00.681 [2024-06-11 12:26:13.562617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.681 [2024-06-11 12:26:13.562633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:00.681 [2024-06-11 12:26:13.572320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f2d80 00:32:00.681 [2024-06-11 12:26:13.573925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.681 [2024-06-11 12:26:13.573941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:00.681 [2024-06-11 12:26:13.583632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190ef6a8 00:32:00.681 [2024-06-11 12:26:13.585104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.681 [2024-06-11 12:26:13.585120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:00.681 [2024-06-11 12:26:13.595036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f3e60 00:32:00.681 [2024-06-11 12:26:13.596521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.681 [2024-06-11 12:26:13.596537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:00.681 [2024-06-11 12:26:13.606397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f1ca0 00:32:00.681 [2024-06-11 12:26:13.607923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.681 [2024-06-11 12:26:13.607939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:00.681 [2024-06-11 12:26:13.617444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e23b8 00:32:00.681 [2024-06-11 12:26:13.618537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.681 [2024-06-11 12:26:13.618553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:00.681 [2024-06-11 12:26:13.628493] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f4f40 00:32:00.681 [2024-06-11 12:26:13.629275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.681 [2024-06-11 12:26:13.629291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:003e 
p:0 m:0 dnr:0 00:32:00.681 [2024-06-11 12:26:13.640471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fa3a0 00:32:00.681 [2024-06-11 12:26:13.641974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.681 [2024-06-11 12:26:13.641990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:00.681 [2024-06-11 12:26:13.651858] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f1868 00:32:00.681 [2024-06-11 12:26:13.653391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.681 [2024-06-11 12:26:13.653407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:00.681 [2024-06-11 12:26:13.663282] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190dece0 00:32:00.682 [2024-06-11 12:26:13.664678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.682 [2024-06-11 12:26:13.664694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:00.682 [2024-06-11 12:26:13.674712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f8e88 00:32:00.682 [2024-06-11 12:26:13.676147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.682 [2024-06-11 12:26:13.676163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.682 [2024-06-11 12:26:13.684217] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fd208 00:32:00.682 [2024-06-11 12:26:13.684471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.682 [2024-06-11 12:26:13.684487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:00.682 [2024-06-11 12:26:13.695803] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190ec840 00:32:00.682 [2024-06-11 12:26:13.696034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.682 [2024-06-11 12:26:13.696050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:00.682 [2024-06-11 12:26:13.709167] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f1868 00:32:00.682 [2024-06-11 12:26:13.710782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.682 [2024-06-11 12:26:13.710798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.943 [2024-06-11 12:26:13.720151] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f7da8 00:32:00.943 [2024-06-11 12:26:13.721221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.943 [2024-06-11 12:26:13.721237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:00.943 [2024-06-11 12:26:13.731898] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e9e10 00:32:00.943 [2024-06-11 12:26:13.733434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.943 [2024-06-11 12:26:13.733450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:00.943 [2024-06-11 12:26:13.743306] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f2510 00:32:00.943 [2024-06-11 12:26:13.744795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.943 [2024-06-11 12:26:13.744814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:00.943 [2024-06-11 12:26:13.754654] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190ecc78 00:32:00.943 [2024-06-11 12:26:13.756032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.943 [2024-06-11 12:26:13.756048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:00.943 [2024-06-11 12:26:13.765982] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190df550 00:32:00.943 [2024-06-11 12:26:13.767502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.943 [2024-06-11 12:26:13.767518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:00.943 [2024-06-11 12:26:13.777357] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190ee5c8 00:32:00.943 [2024-06-11 12:26:13.778886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.943 [2024-06-11 12:26:13.778902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:00.943 [2024-06-11 12:26:13.788721] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e27f0 00:32:00.943 [2024-06-11 12:26:13.790219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.943 [2024-06-11 12:26:13.790235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:00.943 [2024-06-11 12:26:13.800104] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190eff18 00:32:00.943 [2024-06-11 12:26:13.801433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.943 [2024-06-11 12:26:13.801449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:00.943 [2024-06-11 12:26:13.811428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e5a90 00:32:00.943 [2024-06-11 12:26:13.812900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.943 [2024-06-11 12:26:13.812916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:00.943 [2024-06-11 12:26:13.822770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190ecc78 00:32:00.943 [2024-06-11 12:26:13.824085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.943 [2024-06-11 12:26:13.824101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:00.943 [2024-06-11 12:26:13.834004] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e3d08 00:32:00.943 [2024-06-11 12:26:13.835204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.943 [2024-06-11 12:26:13.835220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:00.943 [2024-06-11 12:26:13.845364] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:00.943 [2024-06-11 12:26:13.846561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.943 [2024-06-11 12:26:13.846577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:00.943 [2024-06-11 12:26:13.856736] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190eaef0 00:32:00.943 [2024-06-11 12:26:13.858069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.943 [2024-06-11 12:26:13.858085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:00.943 [2024-06-11 12:26:13.868114] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fdeb0 00:32:00.943 [2024-06-11 12:26:13.869280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.943 [2024-06-11 12:26:13.869296] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:00.943 [2024-06-11 12:26:13.879777] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f4298 00:32:00.943 [2024-06-11 12:26:13.880884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.943 [2024-06-11 12:26:13.880900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:00.943 [2024-06-11 12:26:13.891199] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e1b48 00:32:00.943 [2024-06-11 12:26:13.891999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.943 [2024-06-11 12:26:13.892015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:00.943 [2024-06-11 12:26:13.902366] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f0788 00:32:00.943 [2024-06-11 12:26:13.902835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.943 [2024-06-11 12:26:13.902851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:00.943 [2024-06-11 12:26:13.915225] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fb480 00:32:00.943 [2024-06-11 12:26:13.916320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.943 [2024-06-11 12:26:13.916335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:00.943 [2024-06-11 12:26:13.925082] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190ed0b0 00:32:00.943 [2024-06-11 12:26:13.926229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.943 [2024-06-11 12:26:13.926245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:00.943 [2024-06-11 12:26:13.936500] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f1868 00:32:00.943 [2024-06-11 12:26:13.937506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.943 [2024-06-11 12:26:13.937522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:00.943 [2024-06-11 12:26:13.948023] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f57b0 00:32:00.943 [2024-06-11 12:26:13.949040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.943 [2024-06-11 12:26:13.949057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:00.943 [2024-06-11 12:26:13.960958] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fbcf0 00:32:00.943 [2024-06-11 12:26:13.962170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.943 [2024-06-11 12:26:13.962185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:00.943 [2024-06-11 12:26:13.971223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190eff18 00:32:00.943 [2024-06-11 12:26:13.971887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.943 [2024-06-11 12:26:13.971903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:01.205 [2024-06-11 12:26:13.981631] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f57b0 00:32:01.205 [2024-06-11 12:26:13.982203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.205 [2024-06-11 12:26:13.982218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:01.205 [2024-06-11 12:26:13.993386] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f3e60 00:32:01.205 [2024-06-11 12:26:13.994407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.205 [2024-06-11 12:26:13.994423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:01.205 [2024-06-11 12:26:14.004788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e1710 00:32:01.205 [2024-06-11 12:26:14.005863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.205 [2024-06-11 12:26:14.005879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:01.205 [2024-06-11 12:26:14.016171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190ed0b0 00:32:01.205 [2024-06-11 12:26:14.017203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.205 [2024-06-11 12:26:14.017218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:01.205 [2024-06-11 12:26:14.027510] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e1b48 00:32:01.205 [2024-06-11 12:26:14.028529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.205 [2024-06-11 
12:26:14.028545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:01.205 [2024-06-11 12:26:14.038896] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f2510 00:32:01.205 [2024-06-11 12:26:14.039881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.205 [2024-06-11 12:26:14.039898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:01.205 [2024-06-11 12:26:14.050211] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fb480 00:32:01.205 [2024-06-11 12:26:14.051359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.205 [2024-06-11 12:26:14.051375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:01.205 [2024-06-11 12:26:14.061317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f2d80 00:32:01.205 [2024-06-11 12:26:14.062219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.205 [2024-06-11 12:26:14.062235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:01.205 [2024-06-11 12:26:14.072329] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f9b30 00:32:01.205 [2024-06-11 12:26:14.072626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.205 [2024-06-11 12:26:14.072641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:01.205 [2024-06-11 12:26:14.084497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f2d80 00:32:01.205 [2024-06-11 12:26:14.085345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.205 [2024-06-11 12:26:14.085361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.205 [2024-06-11 12:26:14.097474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e0ea0 00:32:01.205 [2024-06-11 12:26:14.098441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.205 [2024-06-11 12:26:14.098456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.205 [2024-06-11 12:26:14.108769] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f7da8 00:32:01.205 [2024-06-11 12:26:14.109832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:01.205 [2024-06-11 12:26:14.109849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:01.205 [2024-06-11 12:26:14.120438] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e3060 00:32:01.205 [2024-06-11 12:26:14.122099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.206 [2024-06-11 12:26:14.122115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:01.206 [2024-06-11 12:26:14.131769] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f9b30 00:32:01.206 [2024-06-11 12:26:14.133221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.206 [2024-06-11 12:26:14.133237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:01.206 [2024-06-11 12:26:14.143131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e4de8 00:32:01.206 [2024-06-11 12:26:14.144604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.206 [2024-06-11 12:26:14.144623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:01.206 [2024-06-11 12:26:14.154463] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fcdd0 00:32:01.206 [2024-06-11 12:26:14.155912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.206 [2024-06-11 12:26:14.155928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:01.206 [2024-06-11 12:26:14.165837] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f7da8 00:32:01.206 [2024-06-11 12:26:14.167276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.206 [2024-06-11 12:26:14.167292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:01.206 [2024-06-11 12:26:14.177171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f0350 00:32:01.206 [2024-06-11 12:26:14.178767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.206 [2024-06-11 12:26:14.178784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:01.206 [2024-06-11 12:26:14.188159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fc128 00:32:01.206 [2024-06-11 12:26:14.189243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:171 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:01.206 [2024-06-11 12:26:14.189259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:01.206 [2024-06-11 12:26:14.199087] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e38d0 00:32:01.206 [2024-06-11 12:26:14.199741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.206 [2024-06-11 12:26:14.199757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:01.206 [2024-06-11 12:26:14.210022] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190eff18 00:32:01.206 [2024-06-11 12:26:14.210519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.206 [2024-06-11 12:26:14.210535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:01.206 [2024-06-11 12:26:14.222113] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fac10 00:32:01.206 [2024-06-11 12:26:14.222662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.206 [2024-06-11 12:26:14.222678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:01.206 [2024-06-11 12:26:14.234074] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e6300 00:32:01.206 [2024-06-11 12:26:14.235154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.206 [2024-06-11 12:26:14.235170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:01.467 [2024-06-11 12:26:14.245037] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f2948 00:32:01.467 [2024-06-11 12:26:14.246328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.467 [2024-06-11 12:26:14.246344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:01.467 [2024-06-11 12:26:14.256343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f0788 00:32:01.467 [2024-06-11 12:26:14.257439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.467 [2024-06-11 12:26:14.257455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:01.467 [2024-06-11 12:26:14.268253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f6890 00:32:01.467 [2024-06-11 12:26:14.269112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 
lba:7984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.467 [2024-06-11 12:26:14.269129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:01.467 [2024-06-11 12:26:14.279265] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f6020 00:32:01.467 [2024-06-11 12:26:14.280556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.467 [2024-06-11 12:26:14.280572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:01.467 [2024-06-11 12:26:14.290609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e9e10 00:32:01.467 [2024-06-11 12:26:14.291701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.467 [2024-06-11 12:26:14.291717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:01.467 [2024-06-11 12:26:14.302489] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f4f40 00:32:01.467 [2024-06-11 12:26:14.303251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.467 [2024-06-11 12:26:14.303267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:01.467 [2024-06-11 12:26:14.313573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f1ca0 00:32:01.467 [2024-06-11 12:26:14.314804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.467 [2024-06-11 12:26:14.314820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.467 [2024-06-11 12:26:14.324976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f5378 00:32:01.467 [2024-06-11 12:26:14.326360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.467 [2024-06-11 12:26:14.326376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:01.467 [2024-06-11 12:26:14.336366] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f2948 00:32:01.467 [2024-06-11 12:26:14.337715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.467 [2024-06-11 12:26:14.337730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:01.467 [2024-06-11 12:26:14.347706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:01.467 [2024-06-11 12:26:14.349092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:118 nsid:1 lba:18843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.467 [2024-06-11 12:26:14.349109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:01.467 [2024-06-11 12:26:14.359038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f8a50 00:32:01.467 [2024-06-11 12:26:14.360259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.468 [2024-06-11 12:26:14.360275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:01.468 [2024-06-11 12:26:14.370362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f0788 00:32:01.468 [2024-06-11 12:26:14.371565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.468 [2024-06-11 12:26:14.371581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.468 [2024-06-11 12:26:14.381709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190ea680 00:32:01.468 [2024-06-11 12:26:14.382911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.468 [2024-06-11 12:26:14.382927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.468 [2024-06-11 12:26:14.393086] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fdeb0 00:32:01.468 [2024-06-11 12:26:14.394284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.468 [2024-06-11 12:26:14.394300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.468 [2024-06-11 12:26:14.404731] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190e2c28 00:32:01.468 [2024-06-11 12:26:14.405817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.468 [2024-06-11 12:26:14.405833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:01.468 [2024-06-11 12:26:14.416166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190ebb98 00:32:01.468 [2024-06-11 12:26:14.416709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.468 [2024-06-11 12:26:14.416725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:01.468 [2024-06-11 12:26:14.427557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190f57b0 00:32:01.468 [2024-06-11 12:26:14.427974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:01.468 [2024-06-11 12:26:14.427990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:32:01.468 [2024-06-11 12:26:14.438961] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190ee190
00:32:01.468 [2024-06-11 12:26:14.439491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:01.468 [2024-06-11 12:26:14.439510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:32:01.468
00:32:01.468 Latency(us)
00:32:01.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:01.468 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:01.468 nvme0n1 : 2.00 22319.13 87.18 0.00 0.00 5730.82 3440.64 13653.33
00:32:01.468 ===================================================================================================================
00:32:01.468 Total : 22319.13 87.18 0.00 0.00 5730.82 3440.64 13653.33
00:32:01.468 0
00:32:01.468 12:26:14 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:01.468 12:26:14 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:01.468 12:26:14 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:01.468 | .driver_specific
00:32:01.468 | .nvme_error
00:32:01.468 | .status_code
00:32:01.468 | .command_transient_transport_error'
00:32:01.468 12:26:14 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:01.729 12:26:14 -- host/digest.sh@71 -- # (( 175 > 0 ))
00:32:01.729 12:26:14 -- host/digest.sh@73 -- # killprocess 1691500
00:32:01.729 12:26:14 -- common/autotest_common.sh@926 -- # '[' -z 1691500 ']'
00:32:01.729 12:26:14 -- common/autotest_common.sh@930 -- # kill -0 1691500
00:32:01.729 12:26:14 -- common/autotest_common.sh@931 -- # uname
00:32:01.729 12:26:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:32:01.729 12:26:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1691500
00:32:01.729 12:26:14 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:32:01.729 12:26:14 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:32:01.729 12:26:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1691500'
00:32:01.729 killing process with pid 1691500
00:32:01.729 12:26:14 -- common/autotest_common.sh@945 -- # kill 1691500
00:32:01.729 Received shutdown signal, test time was about 2.000000 seconds
00:32:01.729
00:32:01.729 Latency(us)
00:32:01.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:01.729 ===================================================================================================================
00:32:01.729 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:01.729 12:26:14 -- common/autotest_common.sh@950 -- # wait 1691500
00:32:01.989 12:26:14 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:32:01.989 12:26:14 -- host/digest.sh@54 -- # local rw bs qd
00:32:01.989 12:26:14 -- host/digest.sh@56 -- # rw=randwrite
00:32:01.989 12:26:14 -- host/digest.sh@56 -- # bs=131072
00:32:01.989 12:26:14 -- host/digest.sh@56 -- # qd=16
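The trace above (host/digest.sh) checks the run by reading back how many WRITEs completed with TRANSIENT TRANSPORT ERROR: it asks bdevperf's RPC socket for iostat and filters the JSON with jq, and this run reported 175. Below is a minimal bash sketch of that step, assuming the same socket path, bdev name, and jq filter shown in the trace; the helper wrapper and surrounding glue are reconstructed for illustration, not copied from digest.sh.

```bash
#!/usr/bin/env bash
# Minimal sketch of the get_transient_errcount step traced above.
# Assumptions: same RPC socket (/var/tmp/bperf.sock), same bdev (nvme0n1), and
# bdev_nvme_set_options --nvme-error-stat already applied so that bdev_get_iostat
# carries per-status-code NVMe error counters in driver_specific.nvme_error.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

get_transient_errcount() {
    local bdev=$1
    "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
# The test only asserts that the injected digest corruption produced at least one
# transient transport error; this particular run counted 175 of them.
(( errcount > 0 )) && echo "digest errors surfaced as transient transport errors: $errcount"
```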
12:26:14 -- host/digest.sh@58 -- # bperfpid=1692193 00:32:01.989 12:26:14 -- host/digest.sh@60 -- # waitforlisten 1692193 /var/tmp/bperf.sock 00:32:01.989 12:26:14 -- common/autotest_common.sh@819 -- # '[' -z 1692193 ']' 00:32:01.989 12:26:14 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:32:01.989 12:26:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:01.989 12:26:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:01.989 12:26:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:01.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:01.989 12:26:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:01.989 12:26:14 -- common/autotest_common.sh@10 -- # set +x 00:32:01.989 [2024-06-11 12:26:14.831735] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:32:01.989 [2024-06-11 12:26:14.831785] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1692193 ] 00:32:01.989 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:01.989 Zero copy mechanism will not be used. 00:32:01.989 EAL: No free 2048 kB hugepages reported on node 1 00:32:01.989 [2024-06-11 12:26:14.906239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.989 [2024-06-11 12:26:14.931373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:02.559 12:26:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:02.559 12:26:15 -- common/autotest_common.sh@852 -- # return 0 00:32:02.559 12:26:15 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:02.559 12:26:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:02.819 12:26:15 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:02.819 12:26:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:02.819 12:26:15 -- common/autotest_common.sh@10 -- # set +x 00:32:02.819 12:26:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:02.819 12:26:15 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:02.819 12:26:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:03.389 nvme0n1 00:32:03.389 12:26:16 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:03.389 12:26:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:03.389 12:26:16 -- common/autotest_common.sh@10 -- # set +x 00:32:03.389 12:26:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:03.389 12:26:16 -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:03.389 12:26:16 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
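The trace in this block sets up the 128 KiB pass: bdevperf is restarted as the TCP initiator, NVMe error counters and unlimited retries are enabled, the controller is attached with data digest turned on, and CRC32C corruption is injected before perform_tests drives I/O. The condensed bash sketch below strings those traced commands together; the flags, paths, socket, and NQN are copied from the trace, while the backgrounding/ordering glue and the use of the target's default RPC socket for the injection step are assumptions for illustration.

```bash
#!/usr/bin/env bash
# Condensed sketch of the run_bperf_err randwrite 131072 16 setup traced above.
# All commands and flags are taken from the trace; the glue around them is illustrative.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# 1. Start bdevperf: 128 KiB random writes, queue depth 16, 2 s run, wait for RPCs (-z).
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
    -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!   # the trace then waits for $BPERF_SOCK to accept RPCs (waitforlisten)

# 2. Keep per-status-code NVMe error counters and retry failed I/O indefinitely,
#    so injected digest errors are counted rather than failing the job outright.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# 3. Attach the target subsystem with data digest enabled (--ddgst), so data PDUs
#    on this TCP connection carry a CRC32C that is checked on receipt.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 4. Corrupt one out of every 32 crc32c accel operations (the trace issues this via
#    rpc_cmd, shown here against the target application's default RPC socket - an assumption).
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

# 5. Kick off the workload; the resulting "Data digest error" / TRANSIENT TRANSPORT ERROR
#    pairs are what the log records below.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
```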
00:32:03.389 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:03.389 Zero copy mechanism will not be used. 00:32:03.389 Running I/O for 2 seconds... 00:32:03.390 [2024-06-11 12:26:16.284524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.390 [2024-06-11 12:26:16.284711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.390 [2024-06-11 12:26:16.284740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.390 [2024-06-11 12:26:16.291805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.390 [2024-06-11 12:26:16.292128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.390 [2024-06-11 12:26:16.292148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.390 [2024-06-11 12:26:16.300139] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.390 [2024-06-11 12:26:16.300270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.390 [2024-06-11 12:26:16.300288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.390 [2024-06-11 12:26:16.305447] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.390 [2024-06-11 12:26:16.305543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.390 [2024-06-11 12:26:16.305559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.390 [2024-06-11 12:26:16.310383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.390 [2024-06-11 12:26:16.310458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.390 [2024-06-11 12:26:16.310473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.390 [2024-06-11 12:26:16.313848] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.390 [2024-06-11 12:26:16.313934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.390 [2024-06-11 12:26:16.313949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.390 [2024-06-11 12:26:16.317200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.390 [2024-06-11 12:26:16.317285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.390 [2024-06-11 12:26:16.317301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.390 [2024-06-11 12:26:16.320675] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.390 [2024-06-11 12:26:16.320740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.390 [2024-06-11 12:26:16.320756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.390 [2024-06-11 12:26:16.324183] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.390 [2024-06-11 12:26:16.324335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.390 [2024-06-11 12:26:16.324351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.390 [2024-06-11 12:26:16.327648] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.390 [2024-06-11 12:26:16.327734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.390 [2024-06-11 12:26:16.327749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.390 [2024-06-11 12:26:16.330930] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.390 [2024-06-11 12:26:16.331048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.390 [2024-06-11 12:26:16.331064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.390 [2024-06-11 12:26:16.334158] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.390 [2024-06-11 12:26:16.334222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.390 [2024-06-11 12:26:16.334237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.390 [2024-06-11 12:26:16.337343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.390 [2024-06-11 12:26:16.337416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.390 [2024-06-11 12:26:16.337434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.390 [2024-06-11 12:26:16.340510] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.390 [2024-06-11 12:26:16.340578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.390 [2024-06-11 12:26:16.340594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.390 [2024-06-11 12:26:16.343717] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.390 [2024-06-11 12:26:16.343785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.390 [2024-06-11 12:26:16.343800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.390 [2024-06-11 12:26:16.349106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.390 [2024-06-11 12:26:16.349385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.390 [2024-06-11 12:26:16.349403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.390 [2024-06-11 12:26:16.354730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.390 [2024-06-11 12:26:16.355024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.390 [2024-06-11 12:26:16.355041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.390 [2024-06-11 12:26:16.358457] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.390 [2024-06-11 12:26:16.358830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.390 [2024-06-11 12:26:16.358847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.390 [2024-06-11 12:26:16.363953] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.390 [2024-06-11 12:26:16.364067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.390 [2024-06-11 12:26:16.364083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.390 [2024-06-11 12:26:16.368445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.390 [2024-06-11 12:26:16.368540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.390 [2024-06-11 12:26:16.368555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.390 [2024-06-11 12:26:16.372177] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.390 [2024-06-11 12:26:16.372252] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.390 [2024-06-11 12:26:16.372267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.390 [2024-06-11 12:26:16.380185] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.390 [2024-06-11 12:26:16.380512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.390 [2024-06-11 12:26:16.380528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.390 [2024-06-11 12:26:16.386571] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.390 [2024-06-11 12:26:16.386853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.390 [2024-06-11 12:26:16.386869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.390 [2024-06-11 12:26:16.395079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.390 [2024-06-11 12:26:16.395197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.390 [2024-06-11 12:26:16.395212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.390 [2024-06-11 12:26:16.404466] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.390 [2024-06-11 12:26:16.404757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.390 [2024-06-11 12:26:16.404774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.390 [2024-06-11 12:26:16.414430] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.390 [2024-06-11 12:26:16.414634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.390 [2024-06-11 12:26:16.414649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.653 [2024-06-11 12:26:16.424515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.653 [2024-06-11 12:26:16.424766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.653 [2024-06-11 12:26:16.424783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.653 [2024-06-11 12:26:16.435589] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.653 
[2024-06-11 12:26:16.435893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.653 [2024-06-11 12:26:16.435910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.653 [2024-06-11 12:26:16.446000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.653 [2024-06-11 12:26:16.446217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.653 [2024-06-11 12:26:16.446233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.653 [2024-06-11 12:26:16.455126] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.653 [2024-06-11 12:26:16.455291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.653 [2024-06-11 12:26:16.455307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.653 [2024-06-11 12:26:16.463993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.653 [2024-06-11 12:26:16.464078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.653 [2024-06-11 12:26:16.464094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.653 [2024-06-11 12:26:16.473091] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.653 [2024-06-11 12:26:16.473338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.653 [2024-06-11 12:26:16.473353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.653 [2024-06-11 12:26:16.483326] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.653 [2024-06-11 12:26:16.483558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.653 [2024-06-11 12:26:16.483573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.653 [2024-06-11 12:26:16.493766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.653 [2024-06-11 12:26:16.494006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.653 [2024-06-11 12:26:16.494027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.653 [2024-06-11 12:26:16.504251] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.653 [2024-06-11 12:26:16.504620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.653 [2024-06-11 12:26:16.504636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.653 [2024-06-11 12:26:16.514243] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.653 [2024-06-11 12:26:16.514558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.653 [2024-06-11 12:26:16.514574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.653 [2024-06-11 12:26:16.523140] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.653 [2024-06-11 12:26:16.523441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.653 [2024-06-11 12:26:16.523457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.653 [2024-06-11 12:26:16.533274] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.653 [2024-06-11 12:26:16.533499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.653 [2024-06-11 12:26:16.533514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.653 [2024-06-11 12:26:16.543522] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.653 [2024-06-11 12:26:16.543732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.653 [2024-06-11 12:26:16.543750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.653 [2024-06-11 12:26:16.548684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.653 [2024-06-11 12:26:16.548907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.653 [2024-06-11 12:26:16.548922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.653 [2024-06-11 12:26:16.557653] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.653 [2024-06-11 12:26:16.557893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.653 [2024-06-11 12:26:16.557909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.653 [2024-06-11 12:26:16.564286] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.653 [2024-06-11 12:26:16.564360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.653 [2024-06-11 12:26:16.564375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.653 [2024-06-11 12:26:16.569299] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.653 [2024-06-11 12:26:16.569379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.653 [2024-06-11 12:26:16.569394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.653 [2024-06-11 12:26:16.573641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.653 [2024-06-11 12:26:16.573865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.653 [2024-06-11 12:26:16.573881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.653 [2024-06-11 12:26:16.577597] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.653 [2024-06-11 12:26:16.577680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.653 [2024-06-11 12:26:16.577694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.653 [2024-06-11 12:26:16.580789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.653 [2024-06-11 12:26:16.580857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.653 [2024-06-11 12:26:16.580871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.653 [2024-06-11 12:26:16.583987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.653 [2024-06-11 12:26:16.584058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.654 [2024-06-11 12:26:16.584073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.654 [2024-06-11 12:26:16.587643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.654 [2024-06-11 12:26:16.587728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.654 [2024-06-11 12:26:16.587743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:32:03.654 [2024-06-11 12:26:16.591048] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.654 [2024-06-11 12:26:16.591123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.654 [2024-06-11 12:26:16.591138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.654 [2024-06-11 12:26:16.594297] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.654 [2024-06-11 12:26:16.594386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.654 [2024-06-11 12:26:16.594401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.654 [2024-06-11 12:26:16.597489] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.654 [2024-06-11 12:26:16.597560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.654 [2024-06-11 12:26:16.597575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.654 [2024-06-11 12:26:16.600741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.654 [2024-06-11 12:26:16.600865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.654 [2024-06-11 12:26:16.600881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.654 [2024-06-11 12:26:16.605843] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.654 [2024-06-11 12:26:16.605925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.654 [2024-06-11 12:26:16.605939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.654 [2024-06-11 12:26:16.609713] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.654 [2024-06-11 12:26:16.609786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.654 [2024-06-11 12:26:16.609801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.654 [2024-06-11 12:26:16.613428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.654 [2024-06-11 12:26:16.613534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.654 [2024-06-11 12:26:16.613550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.654 [2024-06-11 12:26:16.616917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.654 [2024-06-11 12:26:16.616997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.654 [2024-06-11 12:26:16.617011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.654 [2024-06-11 12:26:16.620234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.654 [2024-06-11 12:26:16.620391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.654 [2024-06-11 12:26:16.620407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.654 [2024-06-11 12:26:16.623424] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.654 [2024-06-11 12:26:16.623565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.654 [2024-06-11 12:26:16.623580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.654 [2024-06-11 12:26:16.626658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.654 [2024-06-11 12:26:16.626781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.654 [2024-06-11 12:26:16.626796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.654 [2024-06-11 12:26:16.629859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.654 [2024-06-11 12:26:16.629940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.654 [2024-06-11 12:26:16.629955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.654 [2024-06-11 12:26:16.633027] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.654 [2024-06-11 12:26:16.633099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.654 [2024-06-11 12:26:16.633114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.654 [2024-06-11 12:26:16.636200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.654 [2024-06-11 12:26:16.636265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.654 [2024-06-11 12:26:16.636279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.654 [2024-06-11 12:26:16.639447] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.654 [2024-06-11 12:26:16.639556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.654 [2024-06-11 12:26:16.639572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.654 [2024-06-11 12:26:16.642635] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.654 [2024-06-11 12:26:16.642701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.654 [2024-06-11 12:26:16.642716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.654 [2024-06-11 12:26:16.645948] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.654 [2024-06-11 12:26:16.646105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.654 [2024-06-11 12:26:16.646129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.654 [2024-06-11 12:26:16.649444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.654 [2024-06-11 12:26:16.649564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.654 [2024-06-11 12:26:16.649580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.654 [2024-06-11 12:26:16.652653] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.654 [2024-06-11 12:26:16.652776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.654 [2024-06-11 12:26:16.652791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.654 [2024-06-11 12:26:16.660079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.654 [2024-06-11 12:26:16.660376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.654 [2024-06-11 12:26:16.660393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.654 [2024-06-11 12:26:16.663543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.654 [2024-06-11 12:26:16.663649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.654 [2024-06-11 12:26:16.663665] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.654 [2024-06-11 12:26:16.666777] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.654 [2024-06-11 12:26:16.666838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.654 [2024-06-11 12:26:16.666853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.654 [2024-06-11 12:26:16.673533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.654 [2024-06-11 12:26:16.673733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.654 [2024-06-11 12:26:16.673748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.654 [2024-06-11 12:26:16.677335] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.654 [2024-06-11 12:26:16.677410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.654 [2024-06-11 12:26:16.677424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.654 [2024-06-11 12:26:16.680641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.654 [2024-06-11 12:26:16.680713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.655 [2024-06-11 12:26:16.680728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.655 [2024-06-11 12:26:16.683938] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.655 [2024-06-11 12:26:16.684061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.655 [2024-06-11 12:26:16.684077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.916 [2024-06-11 12:26:16.687451] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.916 [2024-06-11 12:26:16.687552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.916 [2024-06-11 12:26:16.687568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.916 [2024-06-11 12:26:16.692495] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.916 [2024-06-11 12:26:16.692803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.916 [2024-06-11 
12:26:16.692819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.916 [2024-06-11 12:26:16.696722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.916 [2024-06-11 12:26:16.696848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.916 [2024-06-11 12:26:16.696864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.701190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.701260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.917 [2024-06-11 12:26:16.701275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.707296] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.707367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.917 [2024-06-11 12:26:16.707382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.714073] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.714284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.917 [2024-06-11 12:26:16.714299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.720716] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.720797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.917 [2024-06-11 12:26:16.720813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.725053] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.725141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.917 [2024-06-11 12:26:16.725156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.728396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.728479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:03.917 [2024-06-11 12:26:16.728493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.731597] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.731668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.917 [2024-06-11 12:26:16.731682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.734943] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.735084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.917 [2024-06-11 12:26:16.735100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.738136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.738208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.917 [2024-06-11 12:26:16.738223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.741353] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.741433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.917 [2024-06-11 12:26:16.741448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.746147] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.746428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.917 [2024-06-11 12:26:16.746444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.752970] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.753077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.917 [2024-06-11 12:26:16.753093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.756599] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.756736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.917 [2024-06-11 12:26:16.756751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.760374] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.760569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.917 [2024-06-11 12:26:16.760587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.769196] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.769291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.917 [2024-06-11 12:26:16.769306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.779381] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.779653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.917 [2024-06-11 12:26:16.779670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.789384] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.789649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.917 [2024-06-11 12:26:16.789666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.799919] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.800187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.917 [2024-06-11 12:26:16.800203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.810219] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.810444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.917 [2024-06-11 12:26:16.810459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.820635] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.820888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.917 [2024-06-11 12:26:16.820904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.831302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.831385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.917 [2024-06-11 12:26:16.831400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.842218] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.842514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.917 [2024-06-11 12:26:16.842530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.851573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.851654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.917 [2024-06-11 12:26:16.851669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.862112] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.862385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.917 [2024-06-11 12:26:16.862401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.871551] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.871875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.917 [2024-06-11 12:26:16.871892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.881649] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.881916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.917 [2024-06-11 12:26:16.881932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.917 [2024-06-11 12:26:16.892319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.917 [2024-06-11 12:26:16.892397] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.918 [2024-06-11 12:26:16.892412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.918 [2024-06-11 12:26:16.903073] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.918 [2024-06-11 12:26:16.903342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.918 [2024-06-11 12:26:16.903359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.918 [2024-06-11 12:26:16.913511] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.918 [2024-06-11 12:26:16.913815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.918 [2024-06-11 12:26:16.913831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.918 [2024-06-11 12:26:16.924522] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.918 [2024-06-11 12:26:16.924797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.918 [2024-06-11 12:26:16.924813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.918 [2024-06-11 12:26:16.934678] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.918 [2024-06-11 12:26:16.934811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.918 [2024-06-11 12:26:16.934826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.918 [2024-06-11 12:26:16.945459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:03.918 [2024-06-11 12:26:16.945700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.918 [2024-06-11 12:26:16.945715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.179 [2024-06-11 12:26:16.955155] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.179 [2024-06-11 12:26:16.955225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.179 [2024-06-11 12:26:16.955240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.179 [2024-06-11 12:26:16.963736] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.179 [2024-06-11 12:26:16.963807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.179 [2024-06-11 12:26:16.963822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.179 [2024-06-11 12:26:16.971706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.179 [2024-06-11 12:26:16.972042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.179 [2024-06-11 12:26:16.972058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.179 [2024-06-11 12:26:16.980898] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.179 [2024-06-11 12:26:16.981156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.179 [2024-06-11 12:26:16.981171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.179 [2024-06-11 12:26:16.989484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.179 [2024-06-11 12:26:16.989584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.179 [2024-06-11 12:26:16.989599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.179 [2024-06-11 12:26:16.997199] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.179 [2024-06-11 12:26:16.997473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.179 [2024-06-11 12:26:16.997490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.179 [2024-06-11 12:26:17.006437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.179 [2024-06-11 12:26:17.006570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.179 [2024-06-11 12:26:17.006585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.179 [2024-06-11 12:26:17.015634] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.179 [2024-06-11 12:26:17.015742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.179 [2024-06-11 12:26:17.015760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.179 [2024-06-11 12:26:17.019943] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.179 [2024-06-11 
12:26:17.020024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.179 [2024-06-11 12:26:17.020040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.179 [2024-06-11 12:26:17.026867] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.179 [2024-06-11 12:26:17.026938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.179 [2024-06-11 12:26:17.026953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.179 [2024-06-11 12:26:17.034792] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.179 [2024-06-11 12:26:17.035068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.179 [2024-06-11 12:26:17.035083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.179 [2024-06-11 12:26:17.043714] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.179 [2024-06-11 12:26:17.043785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.179 [2024-06-11 12:26:17.043800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.179 [2024-06-11 12:26:17.052229] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.179 [2024-06-11 12:26:17.052299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.179 [2024-06-11 12:26:17.052314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.179 [2024-06-11 12:26:17.062500] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.179 [2024-06-11 12:26:17.062640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.179 [2024-06-11 12:26:17.062655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.179 [2024-06-11 12:26:17.073138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.179 [2024-06-11 12:26:17.073203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.179 [2024-06-11 12:26:17.073218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.179 [2024-06-11 12:26:17.083331] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with 
pdu=0x2000190fef90 00:32:04.179 [2024-06-11 12:26:17.083591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.179 [2024-06-11 12:26:17.083606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.179 [2024-06-11 12:26:17.094122] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.179 [2024-06-11 12:26:17.094200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.179 [2024-06-11 12:26:17.094215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.179 [2024-06-11 12:26:17.104498] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.179 [2024-06-11 12:26:17.104561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.179 [2024-06-11 12:26:17.104576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.179 [2024-06-11 12:26:17.114154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.179 [2024-06-11 12:26:17.114370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.179 [2024-06-11 12:26:17.114385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.179 [2024-06-11 12:26:17.124249] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.179 [2024-06-11 12:26:17.124370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.179 [2024-06-11 12:26:17.124385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.179 [2024-06-11 12:26:17.132905] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.179 [2024-06-11 12:26:17.133115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.179 [2024-06-11 12:26:17.133131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.179 [2024-06-11 12:26:17.142053] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.180 [2024-06-11 12:26:17.142131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.180 [2024-06-11 12:26:17.142146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.180 [2024-06-11 12:26:17.149489] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.180 [2024-06-11 12:26:17.149728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.180 [2024-06-11 12:26:17.149743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.180 [2024-06-11 12:26:17.157826] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.180 [2024-06-11 12:26:17.158126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.180 [2024-06-11 12:26:17.158143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.180 [2024-06-11 12:26:17.165698] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.180 [2024-06-11 12:26:17.165993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.180 [2024-06-11 12:26:17.166009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.180 [2024-06-11 12:26:17.174226] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.180 [2024-06-11 12:26:17.174300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.180 [2024-06-11 12:26:17.174315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.180 [2024-06-11 12:26:17.181891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.180 [2024-06-11 12:26:17.182136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.180 [2024-06-11 12:26:17.182151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.180 [2024-06-11 12:26:17.190095] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.180 [2024-06-11 12:26:17.190349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.180 [2024-06-11 12:26:17.190364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.180 [2024-06-11 12:26:17.199599] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.180 [2024-06-11 12:26:17.199665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.180 [2024-06-11 12:26:17.199679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.180 [2024-06-11 12:26:17.207946] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.180 [2024-06-11 12:26:17.208095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.180 [2024-06-11 12:26:17.208110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.442 [2024-06-11 12:26:17.215544] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.442 [2024-06-11 12:26:17.215613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.442 [2024-06-11 12:26:17.215628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.442 [2024-06-11 12:26:17.224099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.442 [2024-06-11 12:26:17.224348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.442 [2024-06-11 12:26:17.224365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.442 [2024-06-11 12:26:17.232078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.442 [2024-06-11 12:26:17.232315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.442 [2024-06-11 12:26:17.232331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.442 [2024-06-11 12:26:17.239996] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.442 [2024-06-11 12:26:17.240073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.442 [2024-06-11 12:26:17.240091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.442 [2024-06-11 12:26:17.248163] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.442 [2024-06-11 12:26:17.248413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.442 [2024-06-11 12:26:17.248429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.442 [2024-06-11 12:26:17.256668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.442 [2024-06-11 12:26:17.256724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.442 [2024-06-11 12:26:17.256739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.442 
[2024-06-11 12:26:17.264071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.442 [2024-06-11 12:26:17.264155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.442 [2024-06-11 12:26:17.264170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.442 [2024-06-11 12:26:17.267545] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.442 [2024-06-11 12:26:17.267654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.442 [2024-06-11 12:26:17.267670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.442 [2024-06-11 12:26:17.270991] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.442 [2024-06-11 12:26:17.271103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.442 [2024-06-11 12:26:17.271119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.442 [2024-06-11 12:26:17.274208] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.442 [2024-06-11 12:26:17.274284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.274299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.277494] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.443 [2024-06-11 12:26:17.277620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.277636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.280734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.443 [2024-06-11 12:26:17.280805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.280820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.283951] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.443 [2024-06-11 12:26:17.284059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.284075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.287210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.443 [2024-06-11 12:26:17.287338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.287354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.290375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.443 [2024-06-11 12:26:17.290452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.290467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.293732] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.443 [2024-06-11 12:26:17.293849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.293864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.300617] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.443 [2024-06-11 12:26:17.300863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.300879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.306773] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.443 [2024-06-11 12:26:17.306850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.306865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.310364] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.443 [2024-06-11 12:26:17.310454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.310469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.318346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.443 [2024-06-11 12:26:17.318599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.318615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.327856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.443 [2024-06-11 12:26:17.328130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.328147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.338080] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.443 [2024-06-11 12:26:17.338352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.338369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.345609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.443 [2024-06-11 12:26:17.345890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.345906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.349006] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.443 [2024-06-11 12:26:17.349104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.349119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.352600] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.443 [2024-06-11 12:26:17.352656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.352670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.356220] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.443 [2024-06-11 12:26:17.356312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.356327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.359498] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.443 [2024-06-11 12:26:17.359624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.359639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.362877] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.443 [2024-06-11 12:26:17.362952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.362968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.366234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.443 [2024-06-11 12:26:17.366368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.366383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.369501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.443 [2024-06-11 12:26:17.369606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.369624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.372881] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.443 [2024-06-11 12:26:17.372958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.372973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.376102] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.443 [2024-06-11 12:26:17.376229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.376243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.379529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.443 [2024-06-11 12:26:17.379634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.379650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.383888] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.443 [2024-06-11 12:26:17.383978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.383994] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.387193] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.443 [2024-06-11 12:26:17.387339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.387354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.390363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.443 [2024-06-11 12:26:17.390441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.443 [2024-06-11 12:26:17.390456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.443 [2024-06-11 12:26:17.393566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.444 [2024-06-11 12:26:17.393645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.444 [2024-06-11 12:26:17.393660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.444 [2024-06-11 12:26:17.396827] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.444 [2024-06-11 12:26:17.396925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.444 [2024-06-11 12:26:17.396940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.444 [2024-06-11 12:26:17.400066] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.444 [2024-06-11 12:26:17.400149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.444 [2024-06-11 12:26:17.400164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.444 [2024-06-11 12:26:17.403282] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.444 [2024-06-11 12:26:17.403361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.444 [2024-06-11 12:26:17.403377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.444 [2024-06-11 12:26:17.406500] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.444 [2024-06-11 12:26:17.406610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.444 [2024-06-11 12:26:17.406626] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.444 [2024-06-11 12:26:17.409703] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.444 [2024-06-11 12:26:17.409809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.444 [2024-06-11 12:26:17.409824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.444 [2024-06-11 12:26:17.413496] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.444 [2024-06-11 12:26:17.413835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.444 [2024-06-11 12:26:17.413851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.444 [2024-06-11 12:26:17.417076] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.444 [2024-06-11 12:26:17.417340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.444 [2024-06-11 12:26:17.417356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.444 [2024-06-11 12:26:17.420403] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.444 [2024-06-11 12:26:17.420518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.444 [2024-06-11 12:26:17.420534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.444 [2024-06-11 12:26:17.425089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.444 [2024-06-11 12:26:17.425401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.444 [2024-06-11 12:26:17.425418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.444 [2024-06-11 12:26:17.431090] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.444 [2024-06-11 12:26:17.431161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.444 [2024-06-11 12:26:17.431176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.444 [2024-06-11 12:26:17.435809] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.444 [2024-06-11 12:26:17.435891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.444 [2024-06-11 
12:26:17.435907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.444 [2024-06-11 12:26:17.439345] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.444 [2024-06-11 12:26:17.439455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.444 [2024-06-11 12:26:17.439471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.444 [2024-06-11 12:26:17.442543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.444 [2024-06-11 12:26:17.442625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.444 [2024-06-11 12:26:17.442641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.444 [2024-06-11 12:26:17.445800] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.444 [2024-06-11 12:26:17.445872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.444 [2024-06-11 12:26:17.445887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.444 [2024-06-11 12:26:17.449050] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.444 [2024-06-11 12:26:17.449126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.444 [2024-06-11 12:26:17.449141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.444 [2024-06-11 12:26:17.452276] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.444 [2024-06-11 12:26:17.452389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.444 [2024-06-11 12:26:17.452405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.444 [2024-06-11 12:26:17.455428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.444 [2024-06-11 12:26:17.455506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.444 [2024-06-11 12:26:17.455521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.444 [2024-06-11 12:26:17.458609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.444 [2024-06-11 12:26:17.458707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:04.444 [2024-06-11 12:26:17.458721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.444 [2024-06-11 12:26:17.461784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.444 [2024-06-11 12:26:17.461856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.444 [2024-06-11 12:26:17.461873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.444 [2024-06-11 12:26:17.465760] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.444 [2024-06-11 12:26:17.466047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.444 [2024-06-11 12:26:17.466064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.444 [2024-06-11 12:26:17.469450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.444 [2024-06-11 12:26:17.469732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.444 [2024-06-11 12:26:17.469748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.444 [2024-06-11 12:26:17.473293] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.444 [2024-06-11 12:26:17.473661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.444 [2024-06-11 12:26:17.473678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.706 [2024-06-11 12:26:17.477485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.706 [2024-06-11 12:26:17.477571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.706 [2024-06-11 12:26:17.477586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.706 [2024-06-11 12:26:17.481608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.706 [2024-06-11 12:26:17.481707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.706 [2024-06-11 12:26:17.481723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.706 [2024-06-11 12:26:17.484930] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.707 [2024-06-11 12:26:17.485006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.707 [2024-06-11 12:26:17.485026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.707 [2024-06-11 12:26:17.488145] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.707 [2024-06-11 12:26:17.488219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.707 [2024-06-11 12:26:17.488234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.707 [2024-06-11 12:26:17.491349] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.707 [2024-06-11 12:26:17.491419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.707 [2024-06-11 12:26:17.491435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.707 [2024-06-11 12:26:17.494559] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.707 [2024-06-11 12:26:17.494673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.707 [2024-06-11 12:26:17.494689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.707 [2024-06-11 12:26:17.498373] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.707 [2024-06-11 12:26:17.498445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.707 [2024-06-11 12:26:17.498460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.707 [2024-06-11 12:26:17.501619] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.707 [2024-06-11 12:26:17.501697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.707 [2024-06-11 12:26:17.501712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.707 [2024-06-11 12:26:17.504897] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.707 [2024-06-11 12:26:17.505038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.707 [2024-06-11 12:26:17.505055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.707 [2024-06-11 12:26:17.508495] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.707 [2024-06-11 12:26:17.508650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.707 [2024-06-11 12:26:17.508665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.707 [2024-06-11 12:26:17.513655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.707 [2024-06-11 12:26:17.513943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.707 [2024-06-11 12:26:17.513959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.707 [2024-06-11 12:26:17.523292] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.707 [2024-06-11 12:26:17.523569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.707 [2024-06-11 12:26:17.523585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.707 [2024-06-11 12:26:17.533131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.707 [2024-06-11 12:26:17.533381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.707 [2024-06-11 12:26:17.533396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.707 [2024-06-11 12:26:17.543283] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.707 [2024-06-11 12:26:17.543631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.707 [2024-06-11 12:26:17.543647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.707 [2024-06-11 12:26:17.553339] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.707 [2024-06-11 12:26:17.553495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.707 [2024-06-11 12:26:17.553510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.707 [2024-06-11 12:26:17.563655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.707 [2024-06-11 12:26:17.564008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.707 [2024-06-11 12:26:17.564029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.707 [2024-06-11 12:26:17.574109] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.707 [2024-06-11 12:26:17.574322] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.707 [2024-06-11 12:26:17.574338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.707 [2024-06-11 12:26:17.584771] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.707 [2024-06-11 12:26:17.584995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.707 [2024-06-11 12:26:17.585010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.707 [2024-06-11 12:26:17.595576] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.707 [2024-06-11 12:26:17.595849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.707 [2024-06-11 12:26:17.595866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.707 [2024-06-11 12:26:17.606299] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.707 [2024-06-11 12:26:17.606553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.707 [2024-06-11 12:26:17.606569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.707 [2024-06-11 12:26:17.616359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.707 [2024-06-11 12:26:17.616607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.707 [2024-06-11 12:26:17.616623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.707 [2024-06-11 12:26:17.627121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.707 [2024-06-11 12:26:17.627194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.707 [2024-06-11 12:26:17.627209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.707 [2024-06-11 12:26:17.635681] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.707 [2024-06-11 12:26:17.635984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.707 [2024-06-11 12:26:17.636007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.707 [2024-06-11 12:26:17.639302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.707 [2024-06-11 
12:26:17.639386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.707 [2024-06-11 12:26:17.639401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.707 [2024-06-11 12:26:17.642550] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.707 [2024-06-11 12:26:17.642647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.707 [2024-06-11 12:26:17.642663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.707 [2024-06-11 12:26:17.645763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.707 [2024-06-11 12:26:17.645844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.707 [2024-06-11 12:26:17.645859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.707 [2024-06-11 12:26:17.649043] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.707 [2024-06-11 12:26:17.649155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.707 [2024-06-11 12:26:17.649169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.708 [2024-06-11 12:26:17.652329] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.708 [2024-06-11 12:26:17.652459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.708 [2024-06-11 12:26:17.652475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.708 [2024-06-11 12:26:17.655572] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.708 [2024-06-11 12:26:17.655695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.708 [2024-06-11 12:26:17.655711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.708 [2024-06-11 12:26:17.658755] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.708 [2024-06-11 12:26:17.658834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.708 [2024-06-11 12:26:17.658848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.708 [2024-06-11 12:26:17.661967] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with 
pdu=0x2000190fef90 00:32:04.708 [2024-06-11 12:26:17.662070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.708 [2024-06-11 12:26:17.662085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.708 [2024-06-11 12:26:17.665130] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.708 [2024-06-11 12:26:17.665214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.708 [2024-06-11 12:26:17.665228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.708 [2024-06-11 12:26:17.668317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.708 [2024-06-11 12:26:17.668413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.708 [2024-06-11 12:26:17.668429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.708 [2024-06-11 12:26:17.671491] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.708 [2024-06-11 12:26:17.671564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.708 [2024-06-11 12:26:17.671579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.708 [2024-06-11 12:26:17.674722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.708 [2024-06-11 12:26:17.674830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.708 [2024-06-11 12:26:17.674846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.708 [2024-06-11 12:26:17.678088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.708 [2024-06-11 12:26:17.678382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.708 [2024-06-11 12:26:17.678398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.708 [2024-06-11 12:26:17.683300] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.708 [2024-06-11 12:26:17.683557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.708 [2024-06-11 12:26:17.683573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.708 [2024-06-11 12:26:17.689199] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.708 [2024-06-11 12:26:17.689488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.708 [2024-06-11 12:26:17.689504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.708 [2024-06-11 12:26:17.692711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.708 [2024-06-11 12:26:17.692796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.708 [2024-06-11 12:26:17.692811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.708 [2024-06-11 12:26:17.697878] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.708 [2024-06-11 12:26:17.698075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.708 [2024-06-11 12:26:17.698091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.708 [2024-06-11 12:26:17.705145] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.708 [2024-06-11 12:26:17.705445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.708 [2024-06-11 12:26:17.705462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.708 [2024-06-11 12:26:17.712349] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.708 [2024-06-11 12:26:17.712439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.708 [2024-06-11 12:26:17.712454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.708 [2024-06-11 12:26:17.716200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.708 [2024-06-11 12:26:17.716354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.708 [2024-06-11 12:26:17.716369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.708 [2024-06-11 12:26:17.719582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.708 [2024-06-11 12:26:17.719657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.708 [2024-06-11 12:26:17.719672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.708 [2024-06-11 12:26:17.722934] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.708 [2024-06-11 12:26:17.723004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.708 [2024-06-11 12:26:17.723024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.708 [2024-06-11 12:26:17.729569] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.708 [2024-06-11 12:26:17.729660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.708 [2024-06-11 12:26:17.729675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.708 [2024-06-11 12:26:17.732802] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.708 [2024-06-11 12:26:17.732868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.708 [2024-06-11 12:26:17.732883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.708 [2024-06-11 12:26:17.736242] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.708 [2024-06-11 12:26:17.736337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.708 [2024-06-11 12:26:17.736352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.708 [2024-06-11 12:26:17.739763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.708 [2024-06-11 12:26:17.739831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.708 [2024-06-11 12:26:17.739849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.970 [2024-06-11 12:26:17.747174] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.970 [2024-06-11 12:26:17.747388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.970 [2024-06-11 12:26:17.747404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.970 [2024-06-11 12:26:17.754625] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.970 [2024-06-11 12:26:17.754838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.970 [2024-06-11 12:26:17.754853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.970 
[2024-06-11 12:26:17.763227] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.970 [2024-06-11 12:26:17.763308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.970 [2024-06-11 12:26:17.763324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.970 [2024-06-11 12:26:17.771419] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.970 [2024-06-11 12:26:17.771687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.970 [2024-06-11 12:26:17.771703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.970 [2024-06-11 12:26:17.780285] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.970 [2024-06-11 12:26:17.780424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.970 [2024-06-11 12:26:17.780440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.970 [2024-06-11 12:26:17.788112] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.970 [2024-06-11 12:26:17.788175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.970 [2024-06-11 12:26:17.788190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.970 [2024-06-11 12:26:17.795547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.970 [2024-06-11 12:26:17.795809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.970 [2024-06-11 12:26:17.795825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.970 [2024-06-11 12:26:17.803322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.970 [2024-06-11 12:26:17.803403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.970 [2024-06-11 12:26:17.803419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.970 [2024-06-11 12:26:17.811595] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.970 [2024-06-11 12:26:17.811676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.970 [2024-06-11 12:26:17.811691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:32:04.970 [2024-06-11 12:26:17.819453] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.970 [2024-06-11 12:26:17.819682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.970 [2024-06-11 12:26:17.819697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.970 [2024-06-11 12:26:17.827515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.970 [2024-06-11 12:26:17.827777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.970 [2024-06-11 12:26:17.827793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.970 [2024-06-11 12:26:17.835329] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.970 [2024-06-11 12:26:17.835520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.970 [2024-06-11 12:26:17.835535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.970 [2024-06-11 12:26:17.842577] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.970 [2024-06-11 12:26:17.842779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.970 [2024-06-11 12:26:17.842794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.970 [2024-06-11 12:26:17.851228] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.970 [2024-06-11 12:26:17.851451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.970 [2024-06-11 12:26:17.851467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.970 [2024-06-11 12:26:17.859448] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.970 [2024-06-11 12:26:17.859680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.970 [2024-06-11 12:26:17.859697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.970 [2024-06-11 12:26:17.868607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.970 [2024-06-11 12:26:17.868677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.970 [2024-06-11 12:26:17.868692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.970 [2024-06-11 12:26:17.876345] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.970 [2024-06-11 12:26:17.876459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.970 [2024-06-11 12:26:17.876473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.970 [2024-06-11 12:26:17.886514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.970 [2024-06-11 12:26:17.886732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.970 [2024-06-11 12:26:17.886747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.970 [2024-06-11 12:26:17.894456] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.970 [2024-06-11 12:26:17.894705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.970 [2024-06-11 12:26:17.894721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.970 [2024-06-11 12:26:17.901805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.970 [2024-06-11 12:26:17.902037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.970 [2024-06-11 12:26:17.902052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.970 [2024-06-11 12:26:17.909496] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.970 [2024-06-11 12:26:17.909724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.970 [2024-06-11 12:26:17.909739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.970 [2024-06-11 12:26:17.917156] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.970 [2024-06-11 12:26:17.917472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.970 [2024-06-11 12:26:17.917489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.970 [2024-06-11 12:26:17.925583] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.970 [2024-06-11 12:26:17.925852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.970 [2024-06-11 12:26:17.925868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.970 [2024-06-11 12:26:17.932942] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.970 [2024-06-11 12:26:17.933068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.970 [2024-06-11 12:26:17.933083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.970 [2024-06-11 12:26:17.939926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.970 [2024-06-11 12:26:17.940227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.971 [2024-06-11 12:26:17.940244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.971 [2024-06-11 12:26:17.947949] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.971 [2024-06-11 12:26:17.948090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.971 [2024-06-11 12:26:17.948108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.971 [2024-06-11 12:26:17.956373] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.971 [2024-06-11 12:26:17.956639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.971 [2024-06-11 12:26:17.956656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.971 [2024-06-11 12:26:17.966677] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.971 [2024-06-11 12:26:17.966936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.971 [2024-06-11 12:26:17.966952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.971 [2024-06-11 12:26:17.973614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.971 [2024-06-11 12:26:17.973909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.971 [2024-06-11 12:26:17.973925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.971 [2024-06-11 12:26:17.981959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.971 [2024-06-11 12:26:17.982064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.971 [2024-06-11 12:26:17.982080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.971 [2024-06-11 12:26:17.986613] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.971 [2024-06-11 12:26:17.986691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.971 [2024-06-11 12:26:17.986707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.971 [2024-06-11 12:26:17.990850] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.971 [2024-06-11 12:26:17.990988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.971 [2024-06-11 12:26:17.991004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:04.971 [2024-06-11 12:26:17.994088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.971 [2024-06-11 12:26:17.994177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.971 [2024-06-11 12:26:17.994191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.971 [2024-06-11 12:26:17.997400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.971 [2024-06-11 12:26:17.997547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.971 [2024-06-11 12:26:17.997563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.971 [2024-06-11 12:26:18.000640] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:04.971 [2024-06-11 12:26:18.000741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:04.971 [2024-06-11 12:26:18.000757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:05.232 [2024-06-11 12:26:18.003820] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.232 [2024-06-11 12:26:18.003891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.232 [2024-06-11 12:26:18.003905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:05.232 [2024-06-11 12:26:18.007352] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.232 [2024-06-11 12:26:18.007468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.232 [2024-06-11 
12:26:18.007483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:05.232 [2024-06-11 12:26:18.012464] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.232 [2024-06-11 12:26:18.012644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.232 [2024-06-11 12:26:18.012660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:05.232 [2024-06-11 12:26:18.018120] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.232 [2024-06-11 12:26:18.018196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.232 [2024-06-11 12:26:18.018210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:05.232 [2024-06-11 12:26:18.023879] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.232 [2024-06-11 12:26:18.023998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.232 [2024-06-11 12:26:18.024013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:05.232 [2024-06-11 12:26:18.027606] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.232 [2024-06-11 12:26:18.027692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.232 [2024-06-11 12:26:18.027707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:05.232 [2024-06-11 12:26:18.032054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.232 [2024-06-11 12:26:18.032323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.232 [2024-06-11 12:26:18.032339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:05.232 [2024-06-11 12:26:18.038010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.232 [2024-06-11 12:26:18.038290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.232 [2024-06-11 12:26:18.038306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:05.232 [2024-06-11 12:26:18.041652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.232 [2024-06-11 12:26:18.041927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:05.232 [2024-06-11 12:26:18.041943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:05.232 [2024-06-11 12:26:18.047923] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.232 [2024-06-11 12:26:18.048142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.232 [2024-06-11 12:26:18.048157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:05.232 [2024-06-11 12:26:18.055118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.232 [2024-06-11 12:26:18.055358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.232 [2024-06-11 12:26:18.055374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:05.232 [2024-06-11 12:26:18.060574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.232 [2024-06-11 12:26:18.060830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.232 [2024-06-11 12:26:18.060846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:05.232 [2024-06-11 12:26:18.064353] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.232 [2024-06-11 12:26:18.064454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.232 [2024-06-11 12:26:18.064469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:05.232 [2024-06-11 12:26:18.068246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.232 [2024-06-11 12:26:18.068398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.232 [2024-06-11 12:26:18.068413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:05.232 [2024-06-11 12:26:18.072505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.232 [2024-06-11 12:26:18.072722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.232 [2024-06-11 12:26:18.072737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:05.232 [2024-06-11 12:26:18.078154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.232 [2024-06-11 12:26:18.078230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.232 [2024-06-11 12:26:18.078245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:05.232 [2024-06-11 12:26:18.083157] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.232 [2024-06-11 12:26:18.083261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.232 [2024-06-11 12:26:18.083279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:05.232 [2024-06-11 12:26:18.087034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.232 [2024-06-11 12:26:18.087116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.232 [2024-06-11 12:26:18.087131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:05.232 [2024-06-11 12:26:18.090583] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.232 [2024-06-11 12:26:18.090685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.232 [2024-06-11 12:26:18.090700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:05.232 [2024-06-11 12:26:18.095171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.232 [2024-06-11 12:26:18.095378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.232 [2024-06-11 12:26:18.095393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:05.232 [2024-06-11 12:26:18.103717] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.232 [2024-06-11 12:26:18.104006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.232 [2024-06-11 12:26:18.104027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:05.232 [2024-06-11 12:26:18.111990] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.232 [2024-06-11 12:26:18.112314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.232 [2024-06-11 12:26:18.112330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:05.232 [2024-06-11 12:26:18.119370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.232 [2024-06-11 12:26:18.119536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.232 [2024-06-11 12:26:18.119551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:05.232 [2024-06-11 12:26:18.128763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.232 [2024-06-11 12:26:18.129006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.232 [2024-06-11 12:26:18.129026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:05.233 [2024-06-11 12:26:18.135747] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.233 [2024-06-11 12:26:18.135822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.233 [2024-06-11 12:26:18.135836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:05.233 [2024-06-11 12:26:18.144870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.233 [2024-06-11 12:26:18.145099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.233 [2024-06-11 12:26:18.145114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:05.233 [2024-06-11 12:26:18.152205] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.233 [2024-06-11 12:26:18.152259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.233 [2024-06-11 12:26:18.152274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:05.233 [2024-06-11 12:26:18.158869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.233 [2024-06-11 12:26:18.159102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.233 [2024-06-11 12:26:18.159117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:05.233 [2024-06-11 12:26:18.165518] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.233 [2024-06-11 12:26:18.165879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.233 [2024-06-11 12:26:18.165896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:05.233 [2024-06-11 12:26:18.175282] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.233 [2024-06-11 12:26:18.175561] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.233 [2024-06-11 12:26:18.175578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:05.233 [2024-06-11 12:26:18.183388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.233 [2024-06-11 12:26:18.183684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.233 [2024-06-11 12:26:18.183700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:05.233 [2024-06-11 12:26:18.191004] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.233 [2024-06-11 12:26:18.191236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.233 [2024-06-11 12:26:18.191251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:05.233 [2024-06-11 12:26:18.200111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.233 [2024-06-11 12:26:18.200490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.233 [2024-06-11 12:26:18.200507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:05.233 [2024-06-11 12:26:18.207403] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.233 [2024-06-11 12:26:18.207663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.233 [2024-06-11 12:26:18.207680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:05.233 [2024-06-11 12:26:18.216502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.233 [2024-06-11 12:26:18.216569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.233 [2024-06-11 12:26:18.216584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:05.233 [2024-06-11 12:26:18.223915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.233 [2024-06-11 12:26:18.224181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.233 [2024-06-11 12:26:18.224197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:05.233 [2024-06-11 12:26:18.232264] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.233 [2024-06-11 12:26:18.232549] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.233 [2024-06-11 12:26:18.232565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:05.233 [2024-06-11 12:26:18.239652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.233 [2024-06-11 12:26:18.239895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.233 [2024-06-11 12:26:18.239912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:05.233 [2024-06-11 12:26:18.248421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.233 [2024-06-11 12:26:18.248508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.233 [2024-06-11 12:26:18.248523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:05.233 [2024-06-11 12:26:18.252617] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.233 [2024-06-11 12:26:18.252899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.233 [2024-06-11 12:26:18.252915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:05.233 [2024-06-11 12:26:18.259552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.233 [2024-06-11 12:26:18.259650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.233 [2024-06-11 12:26:18.259667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:05.493 [2024-06-11 12:26:18.267883] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.493 [2024-06-11 12:26:18.267948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.493 [2024-06-11 12:26:18.267963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:05.493 [2024-06-11 12:26:18.275883] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19654f0) with pdu=0x2000190fef90 00:32:05.493 [2024-06-11 12:26:18.276098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.493 [2024-06-11 12:26:18.276116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:05.493 00:32:05.493 Latency(us) 00:32:05.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:05.493 Job: nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 16, IO size: 131072) 00:32:05.493 nvme0n1 : 2.00 5029.58 628.70 0.00 0.00 3175.03 1378.99 11741.87 00:32:05.493 =================================================================================================================== 00:32:05.493 Total : 5029.58 628.70 0.00 0.00 3175.03 1378.99 11741.87 00:32:05.493 0 00:32:05.493 12:26:18 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:05.493 12:26:18 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:05.493 12:26:18 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:05.493 | .driver_specific 00:32:05.493 | .nvme_error 00:32:05.493 | .status_code 00:32:05.493 | .command_transient_transport_error' 00:32:05.493 12:26:18 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:05.493 12:26:18 -- host/digest.sh@71 -- # (( 325 > 0 )) 00:32:05.493 12:26:18 -- host/digest.sh@73 -- # killprocess 1692193 00:32:05.493 12:26:18 -- common/autotest_common.sh@926 -- # '[' -z 1692193 ']' 00:32:05.493 12:26:18 -- common/autotest_common.sh@930 -- # kill -0 1692193 00:32:05.493 12:26:18 -- common/autotest_common.sh@931 -- # uname 00:32:05.493 12:26:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:05.493 12:26:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1692193 00:32:05.753 12:26:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:05.753 12:26:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:05.753 12:26:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1692193' 00:32:05.753 killing process with pid 1692193 00:32:05.753 12:26:18 -- common/autotest_common.sh@945 -- # kill 1692193 00:32:05.753 Received shutdown signal, test time was about 2.000000 seconds 00:32:05.753 00:32:05.753 Latency(us) 00:32:05.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:05.753 =================================================================================================================== 00:32:05.753 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:05.753 12:26:18 -- common/autotest_common.sh@950 -- # wait 1692193 00:32:05.753 12:26:18 -- host/digest.sh@115 -- # killprocess 1689948 00:32:05.753 12:26:18 -- common/autotest_common.sh@926 -- # '[' -z 1689948 ']' 00:32:05.753 12:26:18 -- common/autotest_common.sh@930 -- # kill -0 1689948 00:32:05.753 12:26:18 -- common/autotest_common.sh@931 -- # uname 00:32:05.753 12:26:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:05.753 12:26:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1689948 00:32:05.753 12:26:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:05.753 12:26:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:05.753 12:26:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1689948' 00:32:05.753 killing process with pid 1689948 00:32:05.753 12:26:18 -- common/autotest_common.sh@945 -- # kill 1689948 00:32:05.753 12:26:18 -- common/autotest_common.sh@950 -- # wait 1689948 00:32:06.013 00:32:06.013 real 0m15.373s 00:32:06.013 user 0m30.519s 00:32:06.013 sys 0m3.477s 00:32:06.013 12:26:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:06.013 12:26:18 -- common/autotest_common.sh@10 -- # set +x 00:32:06.013 ************************************ 00:32:06.013 END TEST nvmf_digest_error 00:32:06.013 ************************************ 00:32:06.013 12:26:18 
-- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:32:06.013 12:26:18 -- host/digest.sh@139 -- # nvmftestfini 00:32:06.013 12:26:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:06.013 12:26:18 -- nvmf/common.sh@116 -- # sync 00:32:06.013 12:26:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:06.013 12:26:18 -- nvmf/common.sh@119 -- # set +e 00:32:06.013 12:26:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:06.013 12:26:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:06.013 rmmod nvme_tcp 00:32:06.013 rmmod nvme_fabrics 00:32:06.013 rmmod nvme_keyring 00:32:06.013 12:26:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:06.013 12:26:18 -- nvmf/common.sh@123 -- # set -e 00:32:06.013 12:26:18 -- nvmf/common.sh@124 -- # return 0 00:32:06.013 12:26:18 -- nvmf/common.sh@477 -- # '[' -n 1689948 ']' 00:32:06.013 12:26:18 -- nvmf/common.sh@478 -- # killprocess 1689948 00:32:06.014 12:26:18 -- common/autotest_common.sh@926 -- # '[' -z 1689948 ']' 00:32:06.014 12:26:18 -- common/autotest_common.sh@930 -- # kill -0 1689948 00:32:06.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1689948) - No such process 00:32:06.014 12:26:18 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1689948 is not found' 00:32:06.014 Process with pid 1689948 is not found 00:32:06.014 12:26:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:06.014 12:26:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:06.014 12:26:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:06.014 12:26:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:06.014 12:26:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:06.014 12:26:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:06.014 12:26:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:06.014 12:26:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:08.555 12:26:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:08.555 00:32:08.555 real 0m41.072s 00:32:08.555 user 1m4.142s 00:32:08.555 sys 0m12.283s 00:32:08.555 12:26:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:08.555 12:26:20 -- common/autotest_common.sh@10 -- # set +x 00:32:08.555 ************************************ 00:32:08.555 END TEST nvmf_digest 00:32:08.555 ************************************ 00:32:08.555 12:26:21 -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:32:08.555 12:26:21 -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:32:08.555 12:26:21 -- nvmf/nvmf.sh@119 -- # [[ phy == phy ]] 00:32:08.555 12:26:21 -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:08.555 12:26:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:08.555 12:26:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:08.555 12:26:21 -- common/autotest_common.sh@10 -- # set +x 00:32:08.555 ************************************ 00:32:08.555 START TEST nvmf_bdevperf 00:32:08.555 ************************************ 00:32:08.555 12:26:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:08.555 * Looking for test storage... 
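The nvmf_bdevperf host test that starts here follows the same pattern as the digest test: nvmf/common.sh brings up the SPDK userspace NVMe/TCP target (nvmf_tgt), a 64 MB malloc bdev (Malloc0, 512-byte blocks) is exported through subsystem nqn.2016-06.io.spdk:cnode1, and the bdevperf example is pointed at it over TCP (-q 128 -o 4096 -w verify). Condensed from the trace that follows, the target-side setup amounts to:

    # tgt_init from host/bdevperf.sh, as traced later in this log
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420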
00:32:08.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:08.556 12:26:21 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:08.556 12:26:21 -- nvmf/common.sh@7 -- # uname -s 00:32:08.556 12:26:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:08.556 12:26:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:08.556 12:26:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:08.556 12:26:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:08.556 12:26:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:08.556 12:26:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:08.556 12:26:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:08.556 12:26:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:08.556 12:26:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:08.556 12:26:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:08.556 12:26:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:08.556 12:26:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:08.556 12:26:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:08.556 12:26:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:08.556 12:26:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:08.556 12:26:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:08.556 12:26:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:08.556 12:26:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:08.556 12:26:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:08.556 12:26:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.556 12:26:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.556 12:26:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.556 12:26:21 -- paths/export.sh@5 -- # export PATH 00:32:08.556 12:26:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.556 12:26:21 -- nvmf/common.sh@46 -- # : 0 00:32:08.556 12:26:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:08.556 12:26:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:08.556 12:26:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:08.556 12:26:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:08.556 12:26:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:08.556 12:26:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:08.556 12:26:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:08.556 12:26:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:08.556 12:26:21 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:08.556 12:26:21 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:08.556 12:26:21 -- host/bdevperf.sh@24 -- # nvmftestinit 00:32:08.556 12:26:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:08.556 12:26:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:08.556 12:26:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:08.556 12:26:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:08.556 12:26:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:08.556 12:26:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:08.556 12:26:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:08.556 12:26:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:08.556 12:26:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:08.556 12:26:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:08.556 12:26:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:08.556 12:26:21 -- common/autotest_common.sh@10 -- # set +x 00:32:15.141 12:26:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:15.141 12:26:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:15.141 12:26:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:15.141 12:26:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:15.141 12:26:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:15.141 12:26:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:15.141 12:26:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:15.141 12:26:28 -- nvmf/common.sh@294 -- # net_devs=() 00:32:15.141 12:26:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:15.141 12:26:28 -- nvmf/common.sh@295 
-- # e810=() 00:32:15.141 12:26:28 -- nvmf/common.sh@295 -- # local -ga e810 00:32:15.141 12:26:28 -- nvmf/common.sh@296 -- # x722=() 00:32:15.141 12:26:28 -- nvmf/common.sh@296 -- # local -ga x722 00:32:15.141 12:26:28 -- nvmf/common.sh@297 -- # mlx=() 00:32:15.141 12:26:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:15.141 12:26:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:15.141 12:26:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:15.141 12:26:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:15.141 12:26:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:15.141 12:26:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:15.141 12:26:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:15.141 12:26:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:15.141 12:26:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:15.141 12:26:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:15.141 12:26:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:15.141 12:26:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:15.141 12:26:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:15.141 12:26:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:15.141 12:26:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:15.141 12:26:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:15.141 12:26:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:15.141 12:26:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:15.141 12:26:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:15.141 12:26:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:15.141 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:15.141 12:26:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:15.141 12:26:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:15.141 12:26:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:15.141 12:26:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:15.141 12:26:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:15.141 12:26:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:15.141 12:26:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:15.141 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:15.141 12:26:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:15.141 12:26:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:15.141 12:26:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:15.141 12:26:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:15.141 12:26:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:15.141 12:26:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:15.141 12:26:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:15.141 12:26:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:15.141 12:26:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:15.141 12:26:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:15.141 12:26:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:15.141 12:26:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:15.141 12:26:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:15.141 Found 
net devices under 0000:31:00.0: cvl_0_0 00:32:15.141 12:26:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:15.141 12:26:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:15.141 12:26:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:15.141 12:26:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:15.141 12:26:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:15.141 12:26:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:15.141 Found net devices under 0000:31:00.1: cvl_0_1 00:32:15.141 12:26:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:15.141 12:26:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:15.141 12:26:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:15.141 12:26:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:15.141 12:26:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:15.141 12:26:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:15.141 12:26:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:15.141 12:26:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:15.141 12:26:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:15.141 12:26:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:15.141 12:26:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:15.141 12:26:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:15.142 12:26:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:15.142 12:26:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:15.142 12:26:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:15.142 12:26:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:15.142 12:26:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:15.142 12:26:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:15.142 12:26:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:15.142 12:26:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:15.142 12:26:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:15.402 12:26:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:15.402 12:26:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:15.402 12:26:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:15.402 12:26:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:15.402 12:26:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:15.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:15.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:32:15.402 00:32:15.402 --- 10.0.0.2 ping statistics --- 00:32:15.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.402 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:32:15.402 12:26:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:15.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:15.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:32:15.402 00:32:15.402 --- 10.0.0.1 ping statistics --- 00:32:15.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.402 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:32:15.402 12:26:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:15.402 12:26:28 -- nvmf/common.sh@410 -- # return 0 00:32:15.402 12:26:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:15.402 12:26:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:15.402 12:26:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:15.402 12:26:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:15.402 12:26:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:15.402 12:26:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:15.402 12:26:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:15.402 12:26:28 -- host/bdevperf.sh@25 -- # tgt_init 00:32:15.402 12:26:28 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:15.402 12:26:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:15.402 12:26:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:15.402 12:26:28 -- common/autotest_common.sh@10 -- # set +x 00:32:15.402 12:26:28 -- nvmf/common.sh@469 -- # nvmfpid=1697078 00:32:15.402 12:26:28 -- nvmf/common.sh@470 -- # waitforlisten 1697078 00:32:15.402 12:26:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:15.402 12:26:28 -- common/autotest_common.sh@819 -- # '[' -z 1697078 ']' 00:32:15.402 12:26:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:15.402 12:26:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:15.402 12:26:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:15.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:15.402 12:26:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:15.402 12:26:28 -- common/autotest_common.sh@10 -- # set +x 00:32:15.402 [2024-06-11 12:26:28.416051] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:32:15.402 [2024-06-11 12:26:28.416115] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:15.662 EAL: No free 2048 kB hugepages reported on node 1 00:32:15.662 [2024-06-11 12:26:28.507203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:15.662 [2024-06-11 12:26:28.553301] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:15.662 [2024-06-11 12:26:28.553473] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:15.662 [2024-06-11 12:26:28.553485] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:15.662 [2024-06-11 12:26:28.553495] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
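The trace above is nvmf/common.sh's nvmf_tcp_init step: one port of the dual-port E810 NIC (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and given 10.0.0.2/24 to act as the NVMe/TCP target, the second port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1/24, TCP port 4420 is opened in iptables, connectivity is checked with a ping in each direction, and nvme-tcp is loaded before nvmf_tgt is started on core mask 0xE. A minimal sketch of the same two-endpoint topology, built with a veth pair instead of the physical ports so it can be tried on any Linux host (the interface and namespace names here are invented for the example, and every command needs root):

    # Rebuild the target/initiator split that nvmf_tcp_init sets up above,
    # but with a veth pair instead of the two E810 ports.
    set -e
    ip netns add spdk_tgt_ns                       # namespace for the target side
    ip link add veth_init type veth peer name veth_tgt
    ip link set veth_tgt netns spdk_tgt_ns         # target end goes into the namespace
    ip addr add 10.0.0.1/24 dev veth_init          # initiator address (default namespace)
    ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_init up
    ip netns exec spdk_tgt_ns ip link set veth_tgt up
    ip netns exec spdk_tgt_ns ip link set lo up
    iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                             # target reachable from the initiator side
    ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1   # and the other way around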
00:32:15.662 [2024-06-11 12:26:28.553640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:15.662 [2024-06-11 12:26:28.553800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:15.662 [2024-06-11 12:26:28.553801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:16.233 12:26:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:16.233 12:26:29 -- common/autotest_common.sh@852 -- # return 0 00:32:16.233 12:26:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:16.233 12:26:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:16.233 12:26:29 -- common/autotest_common.sh@10 -- # set +x 00:32:16.233 12:26:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:16.233 12:26:29 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:16.233 12:26:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:16.233 12:26:29 -- common/autotest_common.sh@10 -- # set +x 00:32:16.233 [2024-06-11 12:26:29.240982] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:16.233 12:26:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:16.233 12:26:29 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:16.233 12:26:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:16.233 12:26:29 -- common/autotest_common.sh@10 -- # set +x 00:32:16.493 Malloc0 00:32:16.493 12:26:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:16.493 12:26:29 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:16.493 12:26:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:16.493 12:26:29 -- common/autotest_common.sh@10 -- # set +x 00:32:16.493 12:26:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:16.493 12:26:29 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:16.493 12:26:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:16.493 12:26:29 -- common/autotest_common.sh@10 -- # set +x 00:32:16.493 12:26:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:16.493 12:26:29 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:16.493 12:26:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:16.493 12:26:29 -- common/autotest_common.sh@10 -- # set +x 00:32:16.493 [2024-06-11 12:26:29.311726] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:16.493 12:26:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:16.493 12:26:29 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:32:16.493 12:26:29 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:32:16.493 12:26:29 -- nvmf/common.sh@520 -- # config=() 00:32:16.493 12:26:29 -- nvmf/common.sh@520 -- # local subsystem config 00:32:16.493 12:26:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:16.493 12:26:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:16.493 { 00:32:16.493 "params": { 00:32:16.493 "name": "Nvme$subsystem", 00:32:16.493 "trtype": "$TEST_TRANSPORT", 00:32:16.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:16.493 "adrfam": "ipv4", 00:32:16.493 "trsvcid": "$NVMF_PORT", 00:32:16.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:16.493 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:16.493 "hdgst": ${hdgst:-false}, 00:32:16.493 "ddgst": ${ddgst:-false} 00:32:16.493 }, 00:32:16.493 "method": "bdev_nvme_attach_controller" 00:32:16.493 } 00:32:16.493 EOF 00:32:16.493 )") 00:32:16.493 12:26:29 -- nvmf/common.sh@542 -- # cat 00:32:16.493 12:26:29 -- nvmf/common.sh@544 -- # jq . 00:32:16.493 12:26:29 -- nvmf/common.sh@545 -- # IFS=, 00:32:16.493 12:26:29 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:16.493 "params": { 00:32:16.493 "name": "Nvme1", 00:32:16.493 "trtype": "tcp", 00:32:16.493 "traddr": "10.0.0.2", 00:32:16.493 "adrfam": "ipv4", 00:32:16.493 "trsvcid": "4420", 00:32:16.493 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:16.493 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:16.493 "hdgst": false, 00:32:16.493 "ddgst": false 00:32:16.493 }, 00:32:16.493 "method": "bdev_nvme_attach_controller" 00:32:16.493 }' 00:32:16.493 [2024-06-11 12:26:29.372304] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:32:16.493 [2024-06-11 12:26:29.372352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1697335 ] 00:32:16.493 EAL: No free 2048 kB hugepages reported on node 1 00:32:16.493 [2024-06-11 12:26:29.431542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.493 [2024-06-11 12:26:29.460328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:16.754 Running I/O for 1 seconds... 00:32:17.693 00:32:17.693 Latency(us) 00:32:17.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.693 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:17.693 Verification LBA range: start 0x0 length 0x4000 00:32:17.693 Nvme1n1 : 1.01 13894.57 54.28 0.00 0.00 9169.95 815.79 13052.59 00:32:17.693 =================================================================================================================== 00:32:17.693 Total : 13894.57 54.28 0.00 0.00 9169.95 815.79 13052.59 00:32:17.693 12:26:30 -- host/bdevperf.sh@30 -- # bdevperfpid=1697673 00:32:17.693 12:26:30 -- host/bdevperf.sh@32 -- # sleep 3 00:32:17.693 12:26:30 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:32:17.693 12:26:30 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:32:17.693 12:26:30 -- nvmf/common.sh@520 -- # config=() 00:32:17.693 12:26:30 -- nvmf/common.sh@520 -- # local subsystem config 00:32:17.693 12:26:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:17.693 12:26:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:17.693 { 00:32:17.693 "params": { 00:32:17.693 "name": "Nvme$subsystem", 00:32:17.693 "trtype": "$TEST_TRANSPORT", 00:32:17.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:17.693 "adrfam": "ipv4", 00:32:17.693 "trsvcid": "$NVMF_PORT", 00:32:17.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:17.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:17.693 "hdgst": ${hdgst:-false}, 00:32:17.693 "ddgst": ${ddgst:-false} 00:32:17.693 }, 00:32:17.693 "method": "bdev_nvme_attach_controller" 00:32:17.693 } 00:32:17.693 EOF 00:32:17.693 )") 00:32:17.954 12:26:30 -- nvmf/common.sh@542 -- # cat 00:32:17.954 12:26:30 -- nvmf/common.sh@544 -- # jq . 
00:32:17.954 12:26:30 -- nvmf/common.sh@545 -- # IFS=, 00:32:17.954 12:26:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:17.954 "params": { 00:32:17.954 "name": "Nvme1", 00:32:17.954 "trtype": "tcp", 00:32:17.954 "traddr": "10.0.0.2", 00:32:17.954 "adrfam": "ipv4", 00:32:17.954 "trsvcid": "4420", 00:32:17.954 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:17.954 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:17.954 "hdgst": false, 00:32:17.954 "ddgst": false 00:32:17.954 }, 00:32:17.954 "method": "bdev_nvme_attach_controller" 00:32:17.954 }' 00:32:17.954 [2024-06-11 12:26:30.768489] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:32:17.954 [2024-06-11 12:26:30.768540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1697673 ] 00:32:17.954 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.954 [2024-06-11 12:26:30.827852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.954 [2024-06-11 12:26:30.855076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.214 Running I/O for 15 seconds... 00:32:20.767 12:26:33 -- host/bdevperf.sh@33 -- # kill -9 1697078 00:32:20.767 12:26:33 -- host/bdevperf.sh@35 -- # sleep 3 00:32:20.767 [2024-06-11 12:26:33.743812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:114368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.767 [2024-06-11 12:26:33.743855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.767 [2024-06-11 12:26:33.743876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:114384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.767 [2024-06-11 12:26:33.743885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.767 [2024-06-11 12:26:33.743900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:114392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.767 [2024-06-11 12:26:33.743909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.767 [2024-06-11 12:26:33.743921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:114408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.767 [2024-06-11 12:26:33.743930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.767 [2024-06-11 12:26:33.743943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.767 [2024-06-11 12:26:33.743953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.767 [2024-06-11 12:26:33.743964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.767 [2024-06-11 12:26:33.743974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.767 [2024-06-11 12:26:33.743985] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:113800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.767 [2024-06-11 12:26:33.743996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.767 [2024-06-11 12:26:33.744007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:113816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.767 [2024-06-11 12:26:33.744014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.767 [2024-06-11 12:26:33.744033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:113840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.767 [2024-06-11 12:26:33.744043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.767 [2024-06-11 12:26:33.744054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:114432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.767 [2024-06-11 12:26:33.744064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.767 [2024-06-11 12:26:33.744075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:114440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.767 [2024-06-11 12:26:33.744085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.767 [2024-06-11 12:26:33.744096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:113872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.767 [2024-06-11 12:26:33.744107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.767 [2024-06-11 12:26:33.744118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:113896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.767 [2024-06-11 12:26:33.744133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.767 [2024-06-11 12:26:33.744145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.767 [2024-06-11 12:26:33.744155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.767 [2024-06-11 12:26:33.744166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:113952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.767 [2024-06-11 12:26:33.744178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.767 [2024-06-11 12:26:33.744191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:113960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744216] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:113968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:113984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:114448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:114480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:114504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:114512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:114520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:114536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:114544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:114552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744444] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:114560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:114568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:114576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:114584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:114592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:114008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:114016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:114024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:114056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:114072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744649] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:119 nsid:1 lba:114088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:114104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:114616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:114632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:114656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:114672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:114680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:114688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.768 [2024-06-11 12:26:33.744820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:114696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.768 [2024-06-11 12:26:33.744837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.768 [2024-06-11 12:26:33.744849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 
nsid:1 lba:114704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.769 [2024-06-11 12:26:33.744857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.744867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:114712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.769 [2024-06-11 12:26:33.744875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.744884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:114720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.769 [2024-06-11 12:26:33.744892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.744904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:114112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.769 [2024-06-11 12:26:33.744919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.744929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:114128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.769 [2024-06-11 12:26:33.744940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.744950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:114136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.769 [2024-06-11 12:26:33.744964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.744982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:114144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.769 [2024-06-11 12:26:33.744989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:114160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.769 [2024-06-11 12:26:33.745011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:114168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.769 [2024-06-11 12:26:33.745038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:114176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.769 [2024-06-11 12:26:33.745062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:114184 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.769 [2024-06-11 12:26:33.745088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:114728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.769 [2024-06-11 12:26:33.745114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:114736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.769 [2024-06-11 12:26:33.745143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:114744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.769 [2024-06-11 12:26:33.745171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:114752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.769 [2024-06-11 12:26:33.745194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:114760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.769 [2024-06-11 12:26:33.745222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:114768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.769 [2024-06-11 12:26:33.745247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:114776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.769 [2024-06-11 12:26:33.745273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.769 [2024-06-11 12:26:33.745297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:114792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.769 [2024-06-11 12:26:33.745324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:114192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:20.769 [2024-06-11 12:26:33.745347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:114200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.769 [2024-06-11 12:26:33.745364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:114232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.769 [2024-06-11 12:26:33.745383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:114240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.769 [2024-06-11 12:26:33.745400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:114248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.769 [2024-06-11 12:26:33.745417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:114256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.769 [2024-06-11 12:26:33.745433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:114264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.769 [2024-06-11 12:26:33.745451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:114272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.769 [2024-06-11 12:26:33.745467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:114800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.769 [2024-06-11 12:26:33.745483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:114808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.769 [2024-06-11 12:26:33.745500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:114816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.769 [2024-06-11 
12:26:33.745516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:114824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.769 [2024-06-11 12:26:33.745534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:114832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.769 [2024-06-11 12:26:33.745551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:114840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.769 [2024-06-11 12:26:33.745567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:114848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.769 [2024-06-11 12:26:33.745583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.769 [2024-06-11 12:26:33.745592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.769 [2024-06-11 12:26:33.745600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.745609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:114864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.770 [2024-06-11 12:26:33.745616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.745625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:114280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.770 [2024-06-11 12:26:33.745632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.745641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:114288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.770 [2024-06-11 12:26:33.745649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.745658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:114304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.770 [2024-06-11 12:26:33.745665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.745674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:114312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.770 [2024-06-11 12:26:33.745681] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.745690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:114320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.770 [2024-06-11 12:26:33.745697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.745706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:114328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.770 [2024-06-11 12:26:33.745713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.745723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:114344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.770 [2024-06-11 12:26:33.745730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.745740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:114352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.770 [2024-06-11 12:26:33.745747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.745756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:114360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.770 [2024-06-11 12:26:33.745764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.745773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:114376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.770 [2024-06-11 12:26:33.745781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.745790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:114400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.770 [2024-06-11 12:26:33.745797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.745806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:114416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.770 [2024-06-11 12:26:33.745813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.745823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:114424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.770 [2024-06-11 12:26:33.745829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.745838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:114456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.770 [2024-06-11 12:26:33.745845] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.745855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.770 [2024-06-11 12:26:33.745863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.745872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:114472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.770 [2024-06-11 12:26:33.745879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.745889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:114872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.770 [2024-06-11 12:26:33.745895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.745904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:114880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.770 [2024-06-11 12:26:33.745913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.745923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:114888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.770 [2024-06-11 12:26:33.745930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.745939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:114896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.770 [2024-06-11 12:26:33.745948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.745957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:114904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.770 [2024-06-11 12:26:33.745965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.745975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:114912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.770 [2024-06-11 12:26:33.745982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.745991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:114920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.770 [2024-06-11 12:26:33.745998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.746007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:114928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.770 [2024-06-11 12:26:33.746015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.746029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:114936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.770 [2024-06-11 12:26:33.746036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.746045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:114944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.770 [2024-06-11 12:26:33.746052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.746061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:114952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.770 [2024-06-11 12:26:33.746068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.746078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:114960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.770 [2024-06-11 12:26:33.746086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.746095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:114968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.770 [2024-06-11 12:26:33.746102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.746111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:114976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.770 [2024-06-11 12:26:33.746118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.770 [2024-06-11 12:26:33.746127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.770 [2024-06-11 12:26:33.746134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.771 [2024-06-11 12:26:33.746144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:114992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.771 [2024-06-11 12:26:33.746151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.771 [2024-06-11 12:26:33.746162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:115000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.771 [2024-06-11 12:26:33.746169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.771 [2024-06-11 12:26:33.746178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:115008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.771 [2024-06-11 12:26:33.746185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:20.771 [2024-06-11 12:26:33.746195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:115016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.771 [2024-06-11 12:26:33.746201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.771 [2024-06-11 12:26:33.746210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:115024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.771 [2024-06-11 12:26:33.746217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.771 [2024-06-11 12:26:33.746226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:115032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:20.771 [2024-06-11 12:26:33.746233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.771 [2024-06-11 12:26:33.746243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:114488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.771 [2024-06-11 12:26:33.746250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.771 [2024-06-11 12:26:33.746259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:114496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.771 [2024-06-11 12:26:33.746266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.771 [2024-06-11 12:26:33.746275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:114528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.771 [2024-06-11 12:26:33.746282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.771 [2024-06-11 12:26:33.746292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:114600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.771 [2024-06-11 12:26:33.746299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.771 [2024-06-11 12:26:33.746308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.771 [2024-06-11 12:26:33.746315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.771 [2024-06-11 12:26:33.746324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:114624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.771 [2024-06-11 12:26:33.746331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.771 [2024-06-11 12:26:33.746341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:114648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.771 [2024-06-11 12:26:33.746348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.771 [2024-06-11 
12:26:33.746356] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefb560 is same with the state(5) to be set 00:32:20.771 [2024-06-11 12:26:33.746366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:20.771 [2024-06-11 12:26:33.746372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:20.771 [2024-06-11 12:26:33.746378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114664 len:8 PRP1 0x0 PRP2 0x0 00:32:20.771 [2024-06-11 12:26:33.746386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:20.771 [2024-06-11 12:26:33.746421] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xefb560 was disconnected and freed. reset controller. 00:32:20.771 [2024-06-11 12:26:33.748655] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.771 [2024-06-11 12:26:33.748700] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:20.771 [2024-06-11 12:26:33.749493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.771 [2024-06-11 12:26:33.749862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.771 [2024-06-11 12:26:33.749875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:20.771 [2024-06-11 12:26:33.749886] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:20.771 [2024-06-11 12:26:33.750061] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:20.771 [2024-06-11 12:26:33.750266] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.771 [2024-06-11 12:26:33.750275] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.771 [2024-06-11 12:26:33.750283] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.771 [2024-06-11 12:26:33.752488] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
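Everything from the kill -9 of the nvmf_tgt PID onward is the initiator-side fallout. With the verify job running at queue depth 128, each request still outstanding on qpair 1 is completed by the driver with the NVMe generic status ABORTED - SQ DELETION (status code type 00, status code 08), the qpair is disconnected and freed, and bdev_nvme starts resetting the controller; every reconnect attempt then fails with connect() errno 111 (ECONNREFUSED on Linux) because nothing is listening on 10.0.0.2:4420 any more. The same sequence can be provoked outside the harness roughly as follows, where bdevperf.json stands for the configuration printed above and $TGT_PID for the target's PID (both illustrative):

    # Start the verify workload, then yank the target out from under it.
    ./build/examples/bdevperf --json bdevperf.json -q 128 -o 4096 -w verify -t 15 &
    BDEVPERF_PID=$!
    sleep 3
    kill -9 "$TGT_PID"        # target dies mid-run, exactly as in the trace above
    # bdevperf now completes every outstanding I/O as ABORTED - SQ DELETION,
    # frees the qpair, and keeps retrying the TCP connection (ECONNREFUSED)
    # until something listens on the target address again or the run ends.
    wait "$BDEVPERF_PID"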
00:32:20.771 [2024-06-11 12:26:33.761261] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.771 [2024-06-11 12:26:33.761763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.771 [2024-06-11 12:26:33.762252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.771 [2024-06-11 12:26:33.762290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:20.771 [2024-06-11 12:26:33.762301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:20.771 [2024-06-11 12:26:33.762485] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:20.771 [2024-06-11 12:26:33.762578] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.771 [2024-06-11 12:26:33.762587] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.771 [2024-06-11 12:26:33.762595] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.771 [2024-06-11 12:26:33.764746] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.771 [2024-06-11 12:26:33.773661] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.771 [2024-06-11 12:26:33.774227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.771 [2024-06-11 12:26:33.774595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.771 [2024-06-11 12:26:33.774609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:20.771 [2024-06-11 12:26:33.774618] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:20.771 [2024-06-11 12:26:33.774842] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:20.771 [2024-06-11 12:26:33.775010] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.771 [2024-06-11 12:26:33.775027] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.771 [2024-06-11 12:26:33.775035] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.771 [2024-06-11 12:26:33.777440] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:20.771 [2024-06-11 12:26:33.786214] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:20.771 [2024-06-11 12:26:33.786724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.771 [2024-06-11 12:26:33.786903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.771 [2024-06-11 12:26:33.786914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:20.771 [2024-06-11 12:26:33.786921] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:20.771 [2024-06-11 12:26:33.787053] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:20.771 [2024-06-11 12:26:33.787180] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:20.772 [2024-06-11 12:26:33.787189] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:20.772 [2024-06-11 12:26:33.787196] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:20.772 [2024-06-11 12:26:33.789450] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:20.772 [2024-06-11 12:26:33.798773] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.035 [2024-06-11 12:26:33.799233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.035 [2024-06-11 12:26:33.799549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.035 [2024-06-11 12:26:33.799560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.035 [2024-06-11 12:26:33.799567] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.035 [2024-06-11 12:26:33.799674] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.035 [2024-06-11 12:26:33.799781] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.035 [2024-06-11 12:26:33.799790] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.035 [2024-06-11 12:26:33.799797] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.035 [2024-06-11 12:26:33.802089] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.035 [2024-06-11 12:26:33.811172] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.035 [2024-06-11 12:26:33.811614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.035 [2024-06-11 12:26:33.811909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.035 [2024-06-11 12:26:33.811920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.035 [2024-06-11 12:26:33.811927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.035 [2024-06-11 12:26:33.812076] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.035 [2024-06-11 12:26:33.812188] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.035 [2024-06-11 12:26:33.812196] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.035 [2024-06-11 12:26:33.812203] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.035 [2024-06-11 12:26:33.814599] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.035 [2024-06-11 12:26:33.823701] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.035 [2024-06-11 12:26:33.824126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.035 [2024-06-11 12:26:33.824441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.035 [2024-06-11 12:26:33.824451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.035 [2024-06-11 12:26:33.824459] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.035 [2024-06-11 12:26:33.824641] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.035 [2024-06-11 12:26:33.824785] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.035 [2024-06-11 12:26:33.824794] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.035 [2024-06-11 12:26:33.824803] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.035 [2024-06-11 12:26:33.826995] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.035 [2024-06-11 12:26:33.836340] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.035 [2024-06-11 12:26:33.836822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.035 [2024-06-11 12:26:33.837158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.035 [2024-06-11 12:26:33.837169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.035 [2024-06-11 12:26:33.837177] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.035 [2024-06-11 12:26:33.837264] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.035 [2024-06-11 12:26:33.837408] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.035 [2024-06-11 12:26:33.837416] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.035 [2024-06-11 12:26:33.837423] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.035 [2024-06-11 12:26:33.839506] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.036 [2024-06-11 12:26:33.849070] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.036 [2024-06-11 12:26:33.849408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.036 [2024-06-11 12:26:33.849724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.036 [2024-06-11 12:26:33.849735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.036 [2024-06-11 12:26:33.849742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.036 [2024-06-11 12:26:33.849941] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.036 [2024-06-11 12:26:33.850091] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.036 [2024-06-11 12:26:33.850103] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.036 [2024-06-11 12:26:33.850110] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.036 [2024-06-11 12:26:33.852470] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.036 [2024-06-11 12:26:33.861461] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.036 [2024-06-11 12:26:33.861865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.036 [2024-06-11 12:26:33.862179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.036 [2024-06-11 12:26:33.862190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.036 [2024-06-11 12:26:33.862198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.036 [2024-06-11 12:26:33.862323] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.036 [2024-06-11 12:26:33.862467] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.036 [2024-06-11 12:26:33.862476] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.036 [2024-06-11 12:26:33.862483] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.036 [2024-06-11 12:26:33.864935] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.036 [2024-06-11 12:26:33.873929] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.036 [2024-06-11 12:26:33.874365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.036 [2024-06-11 12:26:33.874762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.036 [2024-06-11 12:26:33.874772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.036 [2024-06-11 12:26:33.874779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.036 [2024-06-11 12:26:33.874905] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.036 [2024-06-11 12:26:33.875012] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.036 [2024-06-11 12:26:33.875026] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.036 [2024-06-11 12:26:33.875033] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.036 [2024-06-11 12:26:33.877281] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.036 [2024-06-11 12:26:33.886471] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.036 [2024-06-11 12:26:33.886992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.036 [2024-06-11 12:26:33.887331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.036 [2024-06-11 12:26:33.887342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.036 [2024-06-11 12:26:33.887349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.036 [2024-06-11 12:26:33.887512] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.036 [2024-06-11 12:26:33.887638] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.036 [2024-06-11 12:26:33.887646] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.036 [2024-06-11 12:26:33.887656] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.036 [2024-06-11 12:26:33.889922] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.036 [2024-06-11 12:26:33.898885] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.036 [2024-06-11 12:26:33.899361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.036 [2024-06-11 12:26:33.899656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.036 [2024-06-11 12:26:33.899667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.036 [2024-06-11 12:26:33.899674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.036 [2024-06-11 12:26:33.899818] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.036 [2024-06-11 12:26:33.899981] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.036 [2024-06-11 12:26:33.899990] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.036 [2024-06-11 12:26:33.899997] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.036 [2024-06-11 12:26:33.902379] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.036 [2024-06-11 12:26:33.911444] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.036 [2024-06-11 12:26:33.911903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.036 [2024-06-11 12:26:33.912233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.036 [2024-06-11 12:26:33.912245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.036 [2024-06-11 12:26:33.912252] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.036 [2024-06-11 12:26:33.912397] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.036 [2024-06-11 12:26:33.912541] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.036 [2024-06-11 12:26:33.912549] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.036 [2024-06-11 12:26:33.912557] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.036 [2024-06-11 12:26:33.914765] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.036 [2024-06-11 12:26:33.924124] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.036 [2024-06-11 12:26:33.924483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.036 [2024-06-11 12:26:33.924793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.036 [2024-06-11 12:26:33.924803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.036 [2024-06-11 12:26:33.924811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.036 [2024-06-11 12:26:33.924955] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.036 [2024-06-11 12:26:33.925104] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.036 [2024-06-11 12:26:33.925114] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.036 [2024-06-11 12:26:33.925122] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.036 [2024-06-11 12:26:33.927413] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.036 [2024-06-11 12:26:33.936599] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.036 [2024-06-11 12:26:33.937071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.036 [2024-06-11 12:26:33.937424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.036 [2024-06-11 12:26:33.937435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.036 [2024-06-11 12:26:33.937442] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.036 [2024-06-11 12:26:33.937549] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.036 [2024-06-11 12:26:33.937712] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.037 [2024-06-11 12:26:33.937721] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.037 [2024-06-11 12:26:33.937729] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.037 [2024-06-11 12:26:33.939790] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.037 [2024-06-11 12:26:33.949023] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.037 [2024-06-11 12:26:33.949500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.037 [2024-06-11 12:26:33.949836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.037 [2024-06-11 12:26:33.949847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.037 [2024-06-11 12:26:33.949854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.037 [2024-06-11 12:26:33.949997] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.037 [2024-06-11 12:26:33.950147] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.037 [2024-06-11 12:26:33.950156] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.037 [2024-06-11 12:26:33.950163] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.037 [2024-06-11 12:26:33.952514] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.037 [2024-06-11 12:26:33.961573] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.037 [2024-06-11 12:26:33.962146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.037 [2024-06-11 12:26:33.962525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.037 [2024-06-11 12:26:33.962538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.037 [2024-06-11 12:26:33.962548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.037 [2024-06-11 12:26:33.962692] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.037 [2024-06-11 12:26:33.962896] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.037 [2024-06-11 12:26:33.962905] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.037 [2024-06-11 12:26:33.962913] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.037 [2024-06-11 12:26:33.965158] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.037 [2024-06-11 12:26:33.974209] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.037 [2024-06-11 12:26:33.974633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.037 [2024-06-11 12:26:33.975033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.037 [2024-06-11 12:26:33.975048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.037 [2024-06-11 12:26:33.975058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.037 [2024-06-11 12:26:33.975221] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.037 [2024-06-11 12:26:33.975406] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.037 [2024-06-11 12:26:33.975416] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.037 [2024-06-11 12:26:33.975424] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.037 [2024-06-11 12:26:33.977791] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.037 [2024-06-11 12:26:33.986586] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.037 [2024-06-11 12:26:33.987126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.037 [2024-06-11 12:26:33.987502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.037 [2024-06-11 12:26:33.987515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.037 [2024-06-11 12:26:33.987525] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.037 [2024-06-11 12:26:33.987706] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.037 [2024-06-11 12:26:33.987836] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.037 [2024-06-11 12:26:33.987844] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.037 [2024-06-11 12:26:33.987853] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.037 [2024-06-11 12:26:33.990023] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.037 [2024-06-11 12:26:33.999106] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.037 [2024-06-11 12:26:33.999596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.037 [2024-06-11 12:26:33.999939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.037 [2024-06-11 12:26:33.999950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.037 [2024-06-11 12:26:33.999958] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.037 [2024-06-11 12:26:34.000107] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.037 [2024-06-11 12:26:34.000234] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.037 [2024-06-11 12:26:34.000243] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.037 [2024-06-11 12:26:34.000250] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.037 [2024-06-11 12:26:34.002569] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.037 [2024-06-11 12:26:34.011651] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.037 [2024-06-11 12:26:34.012172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.037 [2024-06-11 12:26:34.012480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.037 [2024-06-11 12:26:34.012492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.037 [2024-06-11 12:26:34.012499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.037 [2024-06-11 12:26:34.012625] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.037 [2024-06-11 12:26:34.012787] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.037 [2024-06-11 12:26:34.012795] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.037 [2024-06-11 12:26:34.012802] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.037 [2024-06-11 12:26:34.014884] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.037 [2024-06-11 12:26:34.024171] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.037 [2024-06-11 12:26:34.024673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.037 [2024-06-11 12:26:34.025008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.037 [2024-06-11 12:26:34.025022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.037 [2024-06-11 12:26:34.025030] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.037 [2024-06-11 12:26:34.025136] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.037 [2024-06-11 12:26:34.025337] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.037 [2024-06-11 12:26:34.025346] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.037 [2024-06-11 12:26:34.025352] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.037 [2024-06-11 12:26:34.027616] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.037 [2024-06-11 12:26:34.036623] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.037 [2024-06-11 12:26:34.037211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.037 [2024-06-11 12:26:34.037556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.038 [2024-06-11 12:26:34.037570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.038 [2024-06-11 12:26:34.037579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.038 [2024-06-11 12:26:34.037742] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.038 [2024-06-11 12:26:34.037871] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.038 [2024-06-11 12:26:34.037880] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.038 [2024-06-11 12:26:34.037888] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.038 [2024-06-11 12:26:34.040333] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.038 [2024-06-11 12:26:34.049175] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.038 [2024-06-11 12:26:34.049729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.038 [2024-06-11 12:26:34.050110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.038 [2024-06-11 12:26:34.050130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.038 [2024-06-11 12:26:34.050139] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.038 [2024-06-11 12:26:34.050321] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.038 [2024-06-11 12:26:34.050488] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.038 [2024-06-11 12:26:34.050497] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.038 [2024-06-11 12:26:34.050505] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.038 [2024-06-11 12:26:34.052651] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.038 [2024-06-11 12:26:34.061579] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.038 [2024-06-11 12:26:34.062230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.038 [2024-06-11 12:26:34.062580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.038 [2024-06-11 12:26:34.062593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.038 [2024-06-11 12:26:34.062603] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.038 [2024-06-11 12:26:34.062710] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.038 [2024-06-11 12:26:34.062838] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.038 [2024-06-11 12:26:34.062847] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.038 [2024-06-11 12:26:34.062855] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.038 [2024-06-11 12:26:34.065116] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.300 [2024-06-11 12:26:34.074200] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.300 [2024-06-11 12:26:34.074649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.300 [2024-06-11 12:26:34.075597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.300 [2024-06-11 12:26:34.075619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.300 [2024-06-11 12:26:34.075628] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.300 [2024-06-11 12:26:34.075759] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.300 [2024-06-11 12:26:34.075887] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.300 [2024-06-11 12:26:34.075896] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.300 [2024-06-11 12:26:34.075902] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.300 [2024-06-11 12:26:34.078282] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.301 [2024-06-11 12:26:34.086864] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.301 [2024-06-11 12:26:34.087318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.301 [2024-06-11 12:26:34.087631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.301 [2024-06-11 12:26:34.087642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.301 [2024-06-11 12:26:34.087654] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.301 [2024-06-11 12:26:34.087780] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.301 [2024-06-11 12:26:34.087908] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.301 [2024-06-11 12:26:34.087916] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.301 [2024-06-11 12:26:34.087923] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.301 [2024-06-11 12:26:34.090047] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.301 [2024-06-11 12:26:34.099371] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.301 [2024-06-11 12:26:34.099877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.301 [2024-06-11 12:26:34.100222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.301 [2024-06-11 12:26:34.100234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.301 [2024-06-11 12:26:34.100241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.301 [2024-06-11 12:26:34.100366] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.301 [2024-06-11 12:26:34.100529] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.301 [2024-06-11 12:26:34.100537] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.301 [2024-06-11 12:26:34.100544] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.301 [2024-06-11 12:26:34.102788] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.301 [2024-06-11 12:26:34.111815] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.301 [2024-06-11 12:26:34.112269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.301 [2024-06-11 12:26:34.112595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.301 [2024-06-11 12:26:34.112606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.301 [2024-06-11 12:26:34.112613] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.301 [2024-06-11 12:26:34.112775] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.301 [2024-06-11 12:26:34.112937] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.301 [2024-06-11 12:26:34.112946] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.301 [2024-06-11 12:26:34.112953] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.301 [2024-06-11 12:26:34.115519] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.301 [2024-06-11 12:26:34.124218] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.301 [2024-06-11 12:26:34.124823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.301 [2024-06-11 12:26:34.125183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.301 [2024-06-11 12:26:34.125197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.301 [2024-06-11 12:26:34.125207] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.301 [2024-06-11 12:26:34.125374] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.301 [2024-06-11 12:26:34.125523] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.301 [2024-06-11 12:26:34.125532] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.301 [2024-06-11 12:26:34.125541] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.301 [2024-06-11 12:26:34.127723] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.301 [2024-06-11 12:26:34.136571] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.301 [2024-06-11 12:26:34.136994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.301 [2024-06-11 12:26:34.137338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.301 [2024-06-11 12:26:34.137349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.301 [2024-06-11 12:26:34.137357] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.301 [2024-06-11 12:26:34.137501] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.301 [2024-06-11 12:26:34.137645] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.301 [2024-06-11 12:26:34.137654] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.301 [2024-06-11 12:26:34.137661] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.301 [2024-06-11 12:26:34.139758] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.301 [2024-06-11 12:26:34.149039] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.301 [2024-06-11 12:26:34.149569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.301 [2024-06-11 12:26:34.149805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.301 [2024-06-11 12:26:34.149815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.301 [2024-06-11 12:26:34.149823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.301 [2024-06-11 12:26:34.149930] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.301 [2024-06-11 12:26:34.150081] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.301 [2024-06-11 12:26:34.150091] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.301 [2024-06-11 12:26:34.150098] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.301 [2024-06-11 12:26:34.152510] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.301 [2024-06-11 12:26:34.161343] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.301 [2024-06-11 12:26:34.161928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.301 [2024-06-11 12:26:34.162271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.301 [2024-06-11 12:26:34.162286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.301 [2024-06-11 12:26:34.162295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.301 [2024-06-11 12:26:34.162477] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.301 [2024-06-11 12:26:34.162625] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.301 [2024-06-11 12:26:34.162638] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.301 [2024-06-11 12:26:34.162646] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.301 [2024-06-11 12:26:34.164955] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.301 [2024-06-11 12:26:34.173714] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.301 [2024-06-11 12:26:34.174177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.301 [2024-06-11 12:26:34.174512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.301 [2024-06-11 12:26:34.174523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.302 [2024-06-11 12:26:34.174531] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.302 [2024-06-11 12:26:34.174713] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.302 [2024-06-11 12:26:34.174896] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.302 [2024-06-11 12:26:34.174905] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.302 [2024-06-11 12:26:34.174912] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.302 [2024-06-11 12:26:34.177170] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.302 [2024-06-11 12:26:34.186207] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.302 [2024-06-11 12:26:34.186688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.302 [2024-06-11 12:26:34.187028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.302 [2024-06-11 12:26:34.187043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.302 [2024-06-11 12:26:34.187052] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.302 [2024-06-11 12:26:34.187234] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.302 [2024-06-11 12:26:34.187363] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.302 [2024-06-11 12:26:34.187372] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.302 [2024-06-11 12:26:34.187379] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.302 [2024-06-11 12:26:34.189709] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.302 [2024-06-11 12:26:34.198634] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.302 [2024-06-11 12:26:34.199177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.302 [2024-06-11 12:26:34.199542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.302 [2024-06-11 12:26:34.199556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.302 [2024-06-11 12:26:34.199565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.302 [2024-06-11 12:26:34.199728] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.302 [2024-06-11 12:26:34.199913] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.302 [2024-06-11 12:26:34.199923] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.302 [2024-06-11 12:26:34.199934] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.302 [2024-06-11 12:26:34.202234] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.302 [2024-06-11 12:26:34.211062] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.302 [2024-06-11 12:26:34.211520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.302 [2024-06-11 12:26:34.211890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.302 [2024-06-11 12:26:34.211904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.302 [2024-06-11 12:26:34.211913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.302 [2024-06-11 12:26:34.212142] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.302 [2024-06-11 12:26:34.212292] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.302 [2024-06-11 12:26:34.212301] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.302 [2024-06-11 12:26:34.212309] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.302 [2024-06-11 12:26:34.214506] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.302 [2024-06-11 12:26:34.223421] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.302 [2024-06-11 12:26:34.223932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.302 [2024-06-11 12:26:34.224263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.302 [2024-06-11 12:26:34.224278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.302 [2024-06-11 12:26:34.224287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.302 [2024-06-11 12:26:34.224413] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.302 [2024-06-11 12:26:34.224524] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.302 [2024-06-11 12:26:34.224534] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.302 [2024-06-11 12:26:34.224541] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.302 [2024-06-11 12:26:34.226890] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.302 [2024-06-11 12:26:34.235888] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.302 [2024-06-11 12:26:34.236428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.302 [2024-06-11 12:26:34.236805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.302 [2024-06-11 12:26:34.236819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.302 [2024-06-11 12:26:34.236828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.302 [2024-06-11 12:26:34.237009] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.302 [2024-06-11 12:26:34.237187] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.302 [2024-06-11 12:26:34.237197] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.302 [2024-06-11 12:26:34.237204] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.302 [2024-06-11 12:26:34.239465] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.302 [2024-06-11 12:26:34.248378] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.302 [2024-06-11 12:26:34.248947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.302 [2024-06-11 12:26:34.249276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.302 [2024-06-11 12:26:34.249291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.302 [2024-06-11 12:26:34.249300] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.302 [2024-06-11 12:26:34.249464] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.302 [2024-06-11 12:26:34.249593] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.302 [2024-06-11 12:26:34.249602] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.302 [2024-06-11 12:26:34.249610] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.302 [2024-06-11 12:26:34.251831] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.302 [2024-06-11 12:26:34.260913] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.302 [2024-06-11 12:26:34.261413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.302 [2024-06-11 12:26:34.261747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.302 [2024-06-11 12:26:34.261757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.302 [2024-06-11 12:26:34.261765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.302 [2024-06-11 12:26:34.261890] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.302 [2024-06-11 12:26:34.262079] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.302 [2024-06-11 12:26:34.262088] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.302 [2024-06-11 12:26:34.262095] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.302 [2024-06-11 12:26:34.264552] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.302 [2024-06-11 12:26:34.273377] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.302 [2024-06-11 12:26:34.273971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.302 [2024-06-11 12:26:34.274297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.303 [2024-06-11 12:26:34.274312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.303 [2024-06-11 12:26:34.274322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.303 [2024-06-11 12:26:34.274541] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.303 [2024-06-11 12:26:34.274670] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.303 [2024-06-11 12:26:34.274679] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.303 [2024-06-11 12:26:34.274687] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.303 [2024-06-11 12:26:34.276952] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.303 [2024-06-11 12:26:34.285820] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.303 [2024-06-11 12:26:34.286374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.303 [2024-06-11 12:26:34.286714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.303 [2024-06-11 12:26:34.286727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.303 [2024-06-11 12:26:34.286736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.303 [2024-06-11 12:26:34.286899] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.303 [2024-06-11 12:26:34.287057] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.303 [2024-06-11 12:26:34.287068] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.303 [2024-06-11 12:26:34.287075] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.303 [2024-06-11 12:26:34.289200] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.303 [2024-06-11 12:26:34.298188] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.303 [2024-06-11 12:26:34.298774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.303 [2024-06-11 12:26:34.299155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.303 [2024-06-11 12:26:34.299169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.303 [2024-06-11 12:26:34.299178] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.303 [2024-06-11 12:26:34.299360] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.303 [2024-06-11 12:26:34.299508] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.303 [2024-06-11 12:26:34.299517] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.303 [2024-06-11 12:26:34.299525] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.303 [2024-06-11 12:26:34.301836] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.303 [2024-06-11 12:26:34.310635] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.303 [2024-06-11 12:26:34.311281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.303 [2024-06-11 12:26:34.311619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.303 [2024-06-11 12:26:34.311632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.303 [2024-06-11 12:26:34.311641] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.303 [2024-06-11 12:26:34.311804] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.303 [2024-06-11 12:26:34.311934] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.303 [2024-06-11 12:26:34.311942] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.303 [2024-06-11 12:26:34.311950] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.303 [2024-06-11 12:26:34.314228] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.303 [2024-06-11 12:26:34.323109] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.303 [2024-06-11 12:26:34.323772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.303 [2024-06-11 12:26:34.324146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.303 [2024-06-11 12:26:34.324161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.303 [2024-06-11 12:26:34.324170] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.303 [2024-06-11 12:26:34.324296] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.303 [2024-06-11 12:26:34.324445] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.303 [2024-06-11 12:26:34.324454] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.303 [2024-06-11 12:26:34.324461] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.303 [2024-06-11 12:26:34.326773] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.565 [2024-06-11 12:26:34.335490] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.565 [2024-06-11 12:26:34.335964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.565 [2024-06-11 12:26:34.336263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.565 [2024-06-11 12:26:34.336274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.565 [2024-06-11 12:26:34.336282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.566 [2024-06-11 12:26:34.336464] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.566 [2024-06-11 12:26:34.336627] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.566 [2024-06-11 12:26:34.336635] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.566 [2024-06-11 12:26:34.336643] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.566 [2024-06-11 12:26:34.338818] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.566 [2024-06-11 12:26:34.348211] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.566 [2024-06-11 12:26:34.348778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.566 [2024-06-11 12:26:34.349108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.566 [2024-06-11 12:26:34.349123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.566 [2024-06-11 12:26:34.349132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.566 [2024-06-11 12:26:34.349295] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.566 [2024-06-11 12:26:34.349444] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.566 [2024-06-11 12:26:34.349453] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.566 [2024-06-11 12:26:34.349461] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.566 [2024-06-11 12:26:34.351736] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.566 [2024-06-11 12:26:34.360605] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.566 [2024-06-11 12:26:34.361158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.566 [2024-06-11 12:26:34.361489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.566 [2024-06-11 12:26:34.361506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.566 [2024-06-11 12:26:34.361516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.566 [2024-06-11 12:26:34.361680] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.566 [2024-06-11 12:26:34.361846] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.566 [2024-06-11 12:26:34.361855] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.566 [2024-06-11 12:26:34.361863] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.566 [2024-06-11 12:26:34.364180] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.566 [2024-06-11 12:26:34.373186] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.566 [2024-06-11 12:26:34.373759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.566 [2024-06-11 12:26:34.374136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.566 [2024-06-11 12:26:34.374151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.566 [2024-06-11 12:26:34.374161] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.566 [2024-06-11 12:26:34.374324] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.566 [2024-06-11 12:26:34.374435] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.566 [2024-06-11 12:26:34.374444] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.566 [2024-06-11 12:26:34.374452] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.566 [2024-06-11 12:26:34.376833] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.566 [2024-06-11 12:26:34.385611] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.566 [2024-06-11 12:26:34.386122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.566 [2024-06-11 12:26:34.386426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.566 [2024-06-11 12:26:34.386437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.566 [2024-06-11 12:26:34.386444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.566 [2024-06-11 12:26:34.386626] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.566 [2024-06-11 12:26:34.386789] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.566 [2024-06-11 12:26:34.386798] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.566 [2024-06-11 12:26:34.386806] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.566 [2024-06-11 12:26:34.389115] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.566 [2024-06-11 12:26:34.398246] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.566 [2024-06-11 12:26:34.398678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.566 [2024-06-11 12:26:34.398988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.566 [2024-06-11 12:26:34.398998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.566 [2024-06-11 12:26:34.399010] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.566 [2024-06-11 12:26:34.399161] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.566 [2024-06-11 12:26:34.399287] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.566 [2024-06-11 12:26:34.399297] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.566 [2024-06-11 12:26:34.399304] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.566 [2024-06-11 12:26:34.401647] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.566 [2024-06-11 12:26:34.410432] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.566 [2024-06-11 12:26:34.410926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.566 [2024-06-11 12:26:34.411130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.566 [2024-06-11 12:26:34.411140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.566 [2024-06-11 12:26:34.411148] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.566 [2024-06-11 12:26:34.411273] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.566 [2024-06-11 12:26:34.411418] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.566 [2024-06-11 12:26:34.411427] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.566 [2024-06-11 12:26:34.411435] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.566 [2024-06-11 12:26:34.413704] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.566 [2024-06-11 12:26:34.422730] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.566 [2024-06-11 12:26:34.423285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.566 [2024-06-11 12:26:34.423610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.566 [2024-06-11 12:26:34.423623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.566 [2024-06-11 12:26:34.423633] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.566 [2024-06-11 12:26:34.423759] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.566 [2024-06-11 12:26:34.423925] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.566 [2024-06-11 12:26:34.423934] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.566 [2024-06-11 12:26:34.423942] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.567 [2024-06-11 12:26:34.426222] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.567 [2024-06-11 12:26:34.435393] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.567 [2024-06-11 12:26:34.435849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.567 [2024-06-11 12:26:34.436055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.567 [2024-06-11 12:26:34.436069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.567 [2024-06-11 12:26:34.436079] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.567 [2024-06-11 12:26:34.436248] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.567 [2024-06-11 12:26:34.436433] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.567 [2024-06-11 12:26:34.436442] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.567 [2024-06-11 12:26:34.436450] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.567 [2024-06-11 12:26:34.438895] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.567 [2024-06-11 12:26:34.447996] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.567 [2024-06-11 12:26:34.448363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.567 [2024-06-11 12:26:34.448675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.567 [2024-06-11 12:26:34.448686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.567 [2024-06-11 12:26:34.448694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.567 [2024-06-11 12:26:34.448838] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.567 [2024-06-11 12:26:34.449001] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.567 [2024-06-11 12:26:34.449010] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.567 [2024-06-11 12:26:34.449023] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.567 [2024-06-11 12:26:34.451460] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.567 [2024-06-11 12:26:34.460457] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.567 [2024-06-11 12:26:34.461059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.567 [2024-06-11 12:26:34.461418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.567 [2024-06-11 12:26:34.461431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.567 [2024-06-11 12:26:34.461440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.567 [2024-06-11 12:26:34.461622] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.567 [2024-06-11 12:26:34.461789] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.567 [2024-06-11 12:26:34.461798] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.567 [2024-06-11 12:26:34.461806] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.567 [2024-06-11 12:26:34.463896] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.567 [2024-06-11 12:26:34.472901] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.567 [2024-06-11 12:26:34.473473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.567 [2024-06-11 12:26:34.473799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.567 [2024-06-11 12:26:34.473813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.567 [2024-06-11 12:26:34.473822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.567 [2024-06-11 12:26:34.474030] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.567 [2024-06-11 12:26:34.474240] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.567 [2024-06-11 12:26:34.474250] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.567 [2024-06-11 12:26:34.474258] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.567 [2024-06-11 12:26:34.476429] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.567 [2024-06-11 12:26:34.485664] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.567 [2024-06-11 12:26:34.486141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.567 [2024-06-11 12:26:34.486319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.567 [2024-06-11 12:26:34.486332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.567 [2024-06-11 12:26:34.486339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.567 [2024-06-11 12:26:34.486447] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.567 [2024-06-11 12:26:34.486574] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.567 [2024-06-11 12:26:34.486583] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.567 [2024-06-11 12:26:34.486590] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.567 [2024-06-11 12:26:34.488917] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.567 [2024-06-11 12:26:34.497978] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.567 [2024-06-11 12:26:34.498545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.567 [2024-06-11 12:26:34.498928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.567 [2024-06-11 12:26:34.498942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.567 [2024-06-11 12:26:34.498951] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.567 [2024-06-11 12:26:34.499125] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.567 [2024-06-11 12:26:34.499311] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.567 [2024-06-11 12:26:34.499320] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.567 [2024-06-11 12:26:34.499327] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.567 [2024-06-11 12:26:34.501581] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.567 [2024-06-11 12:26:34.510503] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.567 [2024-06-11 12:26:34.511102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.567 [2024-06-11 12:26:34.511477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.567 [2024-06-11 12:26:34.511490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.567 [2024-06-11 12:26:34.511499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.567 [2024-06-11 12:26:34.511682] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.567 [2024-06-11 12:26:34.511793] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.567 [2024-06-11 12:26:34.511806] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.567 [2024-06-11 12:26:34.511813] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.567 [2024-06-11 12:26:34.514152] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.567 [2024-06-11 12:26:34.523076] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.567 [2024-06-11 12:26:34.523574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.567 [2024-06-11 12:26:34.523913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.567 [2024-06-11 12:26:34.523924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.567 [2024-06-11 12:26:34.523932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.567 [2024-06-11 12:26:34.524082] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.567 [2024-06-11 12:26:34.524208] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.567 [2024-06-11 12:26:34.524217] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.568 [2024-06-11 12:26:34.524224] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.568 [2024-06-11 12:26:34.526453] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.568 [2024-06-11 12:26:34.535443] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.568 [2024-06-11 12:26:34.535757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.568 [2024-06-11 12:26:34.536081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.568 [2024-06-11 12:26:34.536092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.568 [2024-06-11 12:26:34.536100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.568 [2024-06-11 12:26:34.536226] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.568 [2024-06-11 12:26:34.536406] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.568 [2024-06-11 12:26:34.536415] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.568 [2024-06-11 12:26:34.536421] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.568 [2024-06-11 12:26:34.538813] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.568 [2024-06-11 12:26:34.547978] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.568 [2024-06-11 12:26:34.548476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.568 [2024-06-11 12:26:34.548819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.568 [2024-06-11 12:26:34.548833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.568 [2024-06-11 12:26:34.548842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.568 [2024-06-11 12:26:34.549034] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.568 [2024-06-11 12:26:34.549201] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.568 [2024-06-11 12:26:34.549210] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.568 [2024-06-11 12:26:34.549222] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.568 [2024-06-11 12:26:34.551590] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.568 [2024-06-11 12:26:34.560497] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.568 [2024-06-11 12:26:34.561118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.568 [2024-06-11 12:26:34.561432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.568 [2024-06-11 12:26:34.561446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.568 [2024-06-11 12:26:34.561455] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.568 [2024-06-11 12:26:34.561599] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.568 [2024-06-11 12:26:34.561728] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.568 [2024-06-11 12:26:34.561738] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.568 [2024-06-11 12:26:34.561745] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.568 [2024-06-11 12:26:34.564043] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.568 [2024-06-11 12:26:34.573097] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.568 [2024-06-11 12:26:34.573703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.568 [2024-06-11 12:26:34.574080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.568 [2024-06-11 12:26:34.574095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.568 [2024-06-11 12:26:34.574105] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.568 [2024-06-11 12:26:34.574286] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.568 [2024-06-11 12:26:34.574397] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.568 [2024-06-11 12:26:34.574407] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.568 [2024-06-11 12:26:34.574414] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.568 [2024-06-11 12:26:34.576647] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.568 [2024-06-11 12:26:34.585831] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.568 [2024-06-11 12:26:34.586421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.568 [2024-06-11 12:26:34.586748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.568 [2024-06-11 12:26:34.586762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.568 [2024-06-11 12:26:34.586771] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.568 [2024-06-11 12:26:34.586934] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.568 [2024-06-11 12:26:34.587091] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.568 [2024-06-11 12:26:34.587100] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.568 [2024-06-11 12:26:34.587108] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.568 [2024-06-11 12:26:34.589218] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.568 [2024-06-11 12:26:34.598270] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.830 [2024-06-11 12:26:34.598817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.830 [2024-06-11 12:26:34.599170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.830 [2024-06-11 12:26:34.599186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.830 [2024-06-11 12:26:34.599195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.830 [2024-06-11 12:26:34.599340] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.830 [2024-06-11 12:26:34.599487] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.830 [2024-06-11 12:26:34.599496] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.830 [2024-06-11 12:26:34.599505] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.830 [2024-06-11 12:26:34.601838] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.830 [2024-06-11 12:26:34.610596] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.830 [2024-06-11 12:26:34.611117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.830 [2024-06-11 12:26:34.611487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.830 [2024-06-11 12:26:34.611500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.830 [2024-06-11 12:26:34.611510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.830 [2024-06-11 12:26:34.611692] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.830 [2024-06-11 12:26:34.611858] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.830 [2024-06-11 12:26:34.611868] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.830 [2024-06-11 12:26:34.611876] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.830 [2024-06-11 12:26:34.614027] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.830 [2024-06-11 12:26:34.623086] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.830 [2024-06-11 12:26:34.623584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.830 [2024-06-11 12:26:34.623917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.830 [2024-06-11 12:26:34.623930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.830 [2024-06-11 12:26:34.623940] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.830 [2024-06-11 12:26:34.624111] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.830 [2024-06-11 12:26:34.624279] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.830 [2024-06-11 12:26:34.624288] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.830 [2024-06-11 12:26:34.624297] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.830 [2024-06-11 12:26:34.626550] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.830 [2024-06-11 12:26:34.635659] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.830 [2024-06-11 12:26:34.636117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.830 [2024-06-11 12:26:34.636489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.830 [2024-06-11 12:26:34.636503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.830 [2024-06-11 12:26:34.636512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.830 [2024-06-11 12:26:34.636675] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.830 [2024-06-11 12:26:34.636824] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.830 [2024-06-11 12:26:34.636833] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.830 [2024-06-11 12:26:34.636840] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.830 [2024-06-11 12:26:34.639198] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.830 [2024-06-11 12:26:34.648079] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.830 [2024-06-11 12:26:34.648649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.830 [2024-06-11 12:26:34.649025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.830 [2024-06-11 12:26:34.649039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.830 [2024-06-11 12:26:34.649049] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.830 [2024-06-11 12:26:34.649193] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.830 [2024-06-11 12:26:34.649342] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.830 [2024-06-11 12:26:34.649351] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.830 [2024-06-11 12:26:34.649358] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.830 [2024-06-11 12:26:34.651556] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.830 [2024-06-11 12:26:34.660454] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.830 [2024-06-11 12:26:34.660909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.830 [2024-06-11 12:26:34.661226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.830 [2024-06-11 12:26:34.661238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.830 [2024-06-11 12:26:34.661246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.830 [2024-06-11 12:26:34.661353] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.830 [2024-06-11 12:26:34.661498] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.830 [2024-06-11 12:26:34.661508] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.830 [2024-06-11 12:26:34.661515] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.830 [2024-06-11 12:26:34.663725] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.830 [2024-06-11 12:26:34.672852] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.830 [2024-06-11 12:26:34.673391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.830 [2024-06-11 12:26:34.673770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.830 [2024-06-11 12:26:34.673783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.831 [2024-06-11 12:26:34.673793] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.831 [2024-06-11 12:26:34.673974] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.831 [2024-06-11 12:26:34.674167] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.831 [2024-06-11 12:26:34.674177] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.831 [2024-06-11 12:26:34.674185] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.831 [2024-06-11 12:26:34.676617] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.831 [2024-06-11 12:26:34.685385] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.831 [2024-06-11 12:26:34.685927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.831 [2024-06-11 12:26:34.686320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.831 [2024-06-11 12:26:34.686335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.831 [2024-06-11 12:26:34.686345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.831 [2024-06-11 12:26:34.686527] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.831 [2024-06-11 12:26:34.686675] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.831 [2024-06-11 12:26:34.686684] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.831 [2024-06-11 12:26:34.686693] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.831 [2024-06-11 12:26:34.689248] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.831 [2024-06-11 12:26:34.698086] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.831 [2024-06-11 12:26:34.698673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.831 [2024-06-11 12:26:34.699055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.831 [2024-06-11 12:26:34.699070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.831 [2024-06-11 12:26:34.699079] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.831 [2024-06-11 12:26:34.699261] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.831 [2024-06-11 12:26:34.699447] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.831 [2024-06-11 12:26:34.699457] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.831 [2024-06-11 12:26:34.699464] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.831 [2024-06-11 12:26:34.701703] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.831 [2024-06-11 12:26:34.710451] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.831 [2024-06-11 12:26:34.711009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.831 [2024-06-11 12:26:34.711380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.831 [2024-06-11 12:26:34.711397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.831 [2024-06-11 12:26:34.711407] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.831 [2024-06-11 12:26:34.711552] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.831 [2024-06-11 12:26:34.711700] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.831 [2024-06-11 12:26:34.711709] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.831 [2024-06-11 12:26:34.711717] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.831 [2024-06-11 12:26:34.713844] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.831 [2024-06-11 12:26:34.722857] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.831 [2024-06-11 12:26:34.723435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.831 [2024-06-11 12:26:34.723761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.831 [2024-06-11 12:26:34.723775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.831 [2024-06-11 12:26:34.723784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.831 [2024-06-11 12:26:34.723928] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.831 [2024-06-11 12:26:34.724085] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.831 [2024-06-11 12:26:34.724094] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.831 [2024-06-11 12:26:34.724103] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.831 [2024-06-11 12:26:34.726301] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.831 [2024-06-11 12:26:34.735424] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.831 [2024-06-11 12:26:34.736078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.831 [2024-06-11 12:26:34.736433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.831 [2024-06-11 12:26:34.736446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.831 [2024-06-11 12:26:34.736455] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.831 [2024-06-11 12:26:34.736656] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.831 [2024-06-11 12:26:34.736767] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.831 [2024-06-11 12:26:34.736776] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.831 [2024-06-11 12:26:34.736784] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.831 [2024-06-11 12:26:34.738895] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.831 [2024-06-11 12:26:34.747799] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.831 [2024-06-11 12:26:34.748348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.831 [2024-06-11 12:26:34.748720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.831 [2024-06-11 12:26:34.748735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.831 [2024-06-11 12:26:34.748749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.831 [2024-06-11 12:26:34.748950] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.831 [2024-06-11 12:26:34.749125] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.831 [2024-06-11 12:26:34.749136] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.831 [2024-06-11 12:26:34.749143] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.831 [2024-06-11 12:26:34.751398] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.831 [2024-06-11 12:26:34.760457] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.831 [2024-06-11 12:26:34.761034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.831 [2024-06-11 12:26:34.761383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.831 [2024-06-11 12:26:34.761396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.831 [2024-06-11 12:26:34.761406] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.831 [2024-06-11 12:26:34.761607] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.831 [2024-06-11 12:26:34.761755] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.831 [2024-06-11 12:26:34.761764] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.831 [2024-06-11 12:26:34.761772] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.831 [2024-06-11 12:26:34.764109] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.831 [2024-06-11 12:26:34.773080] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.832 [2024-06-11 12:26:34.773594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.832 [2024-06-11 12:26:34.773959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.832 [2024-06-11 12:26:34.773972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.832 [2024-06-11 12:26:34.773982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.832 [2024-06-11 12:26:34.774170] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.832 [2024-06-11 12:26:34.774320] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.832 [2024-06-11 12:26:34.774329] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.832 [2024-06-11 12:26:34.774336] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.832 [2024-06-11 12:26:34.776528] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.832 [2024-06-11 12:26:34.785704] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.832 [2024-06-11 12:26:34.786330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.832 [2024-06-11 12:26:34.786687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.832 [2024-06-11 12:26:34.786701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.832 [2024-06-11 12:26:34.786710] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.832 [2024-06-11 12:26:34.786877] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.832 [2024-06-11 12:26:34.787054] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.832 [2024-06-11 12:26:34.787064] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.832 [2024-06-11 12:26:34.787071] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.832 [2024-06-11 12:26:34.789512] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.832 [2024-06-11 12:26:34.798196] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.832 [2024-06-11 12:26:34.798774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.832 [2024-06-11 12:26:34.799005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.832 [2024-06-11 12:26:34.799026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.832 [2024-06-11 12:26:34.799036] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.832 [2024-06-11 12:26:34.799162] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.832 [2024-06-11 12:26:34.799348] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.832 [2024-06-11 12:26:34.799358] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.832 [2024-06-11 12:26:34.799366] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.832 [2024-06-11 12:26:34.801843] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.832 [2024-06-11 12:26:34.810702] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.832 [2024-06-11 12:26:34.811319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.832 [2024-06-11 12:26:34.811690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.832 [2024-06-11 12:26:34.811704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.832 [2024-06-11 12:26:34.811713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.832 [2024-06-11 12:26:34.811857] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.832 [2024-06-11 12:26:34.812034] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.832 [2024-06-11 12:26:34.812044] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.832 [2024-06-11 12:26:34.812051] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.832 [2024-06-11 12:26:34.814271] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.832 [2024-06-11 12:26:34.823230] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.832 [2024-06-11 12:26:34.823744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.832 [2024-06-11 12:26:34.824120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.832 [2024-06-11 12:26:34.824136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.832 [2024-06-11 12:26:34.824145] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.832 [2024-06-11 12:26:34.824326] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.832 [2024-06-11 12:26:34.824478] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.832 [2024-06-11 12:26:34.824488] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.832 [2024-06-11 12:26:34.824496] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.832 [2024-06-11 12:26:34.826917] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.832 [2024-06-11 12:26:34.835686] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.832 [2024-06-11 12:26:34.836281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.832 [2024-06-11 12:26:34.836600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.832 [2024-06-11 12:26:34.836614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.832 [2024-06-11 12:26:34.836624] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.832 [2024-06-11 12:26:34.836768] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.832 [2024-06-11 12:26:34.836879] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.832 [2024-06-11 12:26:34.836888] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.832 [2024-06-11 12:26:34.836896] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.832 [2024-06-11 12:26:34.839399] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:21.832 [2024-06-11 12:26:34.848250] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.832 [2024-06-11 12:26:34.848697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.832 [2024-06-11 12:26:34.848921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.832 [2024-06-11 12:26:34.848934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.832 [2024-06-11 12:26:34.848943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.832 [2024-06-11 12:26:34.849115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.832 [2024-06-11 12:26:34.849301] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.832 [2024-06-11 12:26:34.849311] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.832 [2024-06-11 12:26:34.849318] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:21.832 [2024-06-11 12:26:34.851608] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:21.832 [2024-06-11 12:26:34.860452] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.832 [2024-06-11 12:26:34.861044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.832 [2024-06-11 12:26:34.861432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.832 [2024-06-11 12:26:34.861445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:21.832 [2024-06-11 12:26:34.861455] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:21.832 [2024-06-11 12:26:34.861618] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:21.832 [2024-06-11 12:26:34.861766] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:21.832 [2024-06-11 12:26:34.861775] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:21.832 [2024-06-11 12:26:34.861787] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.094 [2024-06-11 12:26:34.864087] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.094 [2024-06-11 12:26:34.873033] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.094 [2024-06-11 12:26:34.873575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.094 [2024-06-11 12:26:34.873946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.094 [2024-06-11 12:26:34.873960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.094 [2024-06-11 12:26:34.873969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.094 [2024-06-11 12:26:34.874140] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.094 [2024-06-11 12:26:34.874346] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.094 [2024-06-11 12:26:34.874356] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.094 [2024-06-11 12:26:34.874364] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.094 [2024-06-11 12:26:34.876778] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.094 [2024-06-11 12:26:34.885520] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.094 [2024-06-11 12:26:34.886067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.094 [2024-06-11 12:26:34.886403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.094 [2024-06-11 12:26:34.886417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.094 [2024-06-11 12:26:34.886427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.094 [2024-06-11 12:26:34.886590] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.094 [2024-06-11 12:26:34.886738] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.094 [2024-06-11 12:26:34.886748] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.094 [2024-06-11 12:26:34.886756] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.094 [2024-06-11 12:26:34.889002] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.094 [2024-06-11 12:26:34.897885] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.094 [2024-06-11 12:26:34.898430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.094 [2024-06-11 12:26:34.898806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.094 [2024-06-11 12:26:34.898820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.094 [2024-06-11 12:26:34.898829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.094 [2024-06-11 12:26:34.898955] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.094 [2024-06-11 12:26:34.899075] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.094 [2024-06-11 12:26:34.899085] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.095 [2024-06-11 12:26:34.899092] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.095 [2024-06-11 12:26:34.901256] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.095 [2024-06-11 12:26:34.910457] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.095 [2024-06-11 12:26:34.910922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.095 [2024-06-11 12:26:34.911115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.095 [2024-06-11 12:26:34.911128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.095 [2024-06-11 12:26:34.911136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.095 [2024-06-11 12:26:34.911318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.095 [2024-06-11 12:26:34.911482] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.095 [2024-06-11 12:26:34.911491] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.095 [2024-06-11 12:26:34.911497] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.095 [2024-06-11 12:26:34.913843] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.095 [2024-06-11 12:26:34.922888] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.095 [2024-06-11 12:26:34.923504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.095 [2024-06-11 12:26:34.923877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.095 [2024-06-11 12:26:34.923891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.095 [2024-06-11 12:26:34.923900] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.095 [2024-06-11 12:26:34.924072] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.095 [2024-06-11 12:26:34.924184] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.095 [2024-06-11 12:26:34.924193] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.095 [2024-06-11 12:26:34.924201] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.095 [2024-06-11 12:26:34.926437] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.095 [2024-06-11 12:26:34.935500] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.095 [2024-06-11 12:26:34.936108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.095 [2024-06-11 12:26:34.936444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.095 [2024-06-11 12:26:34.936458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.095 [2024-06-11 12:26:34.936467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.095 [2024-06-11 12:26:34.936630] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.095 [2024-06-11 12:26:34.936816] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.095 [2024-06-11 12:26:34.936825] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.095 [2024-06-11 12:26:34.936833] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.095 [2024-06-11 12:26:34.939170] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.095 [2024-06-11 12:26:34.947860] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.095 [2024-06-11 12:26:34.948449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.095 [2024-06-11 12:26:34.948818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.095 [2024-06-11 12:26:34.948832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.095 [2024-06-11 12:26:34.948841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.095 [2024-06-11 12:26:34.949004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.095 [2024-06-11 12:26:34.949162] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.095 [2024-06-11 12:26:34.949171] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.095 [2024-06-11 12:26:34.949179] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.095 [2024-06-11 12:26:34.951302] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.095 [2024-06-11 12:26:34.960320] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.095 [2024-06-11 12:26:34.960823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.095 [2024-06-11 12:26:34.961159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.095 [2024-06-11 12:26:34.961175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.095 [2024-06-11 12:26:34.961184] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.095 [2024-06-11 12:26:34.961329] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.095 [2024-06-11 12:26:34.961514] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.095 [2024-06-11 12:26:34.961524] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.095 [2024-06-11 12:26:34.961531] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.095 [2024-06-11 12:26:34.963824] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.095 [2024-06-11 12:26:34.972838] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.095 [2024-06-11 12:26:34.973454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.095 [2024-06-11 12:26:34.973818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.095 [2024-06-11 12:26:34.973832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.095 [2024-06-11 12:26:34.973841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.095 [2024-06-11 12:26:34.974004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.095 [2024-06-11 12:26:34.974161] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.095 [2024-06-11 12:26:34.974171] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.095 [2024-06-11 12:26:34.974178] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.095 [2024-06-11 12:26:34.976439] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.095 [2024-06-11 12:26:34.985367] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.095 [2024-06-11 12:26:34.985903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.095 [2024-06-11 12:26:34.986242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.095 [2024-06-11 12:26:34.986257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.095 [2024-06-11 12:26:34.986267] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.095 [2024-06-11 12:26:34.986411] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.095 [2024-06-11 12:26:34.986578] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.095 [2024-06-11 12:26:34.986587] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.095 [2024-06-11 12:26:34.986594] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.095 [2024-06-11 12:26:34.988923] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.095 [2024-06-11 12:26:34.997884] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.095 [2024-06-11 12:26:34.998470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.095 [2024-06-11 12:26:34.998799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.095 [2024-06-11 12:26:34.998812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.095 [2024-06-11 12:26:34.998822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.095 [2024-06-11 12:26:34.999031] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.095 [2024-06-11 12:26:34.999199] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.095 [2024-06-11 12:26:34.999208] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.095 [2024-06-11 12:26:34.999216] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.095 [2024-06-11 12:26:35.001415] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.095 [2024-06-11 12:26:35.010280] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.095 [2024-06-11 12:26:35.010821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.095 [2024-06-11 12:26:35.011196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.095 [2024-06-11 12:26:35.011211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.095 [2024-06-11 12:26:35.011221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.095 [2024-06-11 12:26:35.011347] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.095 [2024-06-11 12:26:35.011495] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.096 [2024-06-11 12:26:35.011504] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.096 [2024-06-11 12:26:35.011511] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.096 [2024-06-11 12:26:35.013659] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.096 [2024-06-11 12:26:35.022723] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.096 [2024-06-11 12:26:35.023179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.096 [2024-06-11 12:26:35.023511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.096 [2024-06-11 12:26:35.023526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.096 [2024-06-11 12:26:35.023535] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.096 [2024-06-11 12:26:35.023698] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.096 [2024-06-11 12:26:35.023861] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.096 [2024-06-11 12:26:35.023870] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.096 [2024-06-11 12:26:35.023878] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.096 [2024-06-11 12:26:35.026107] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.096 [2024-06-11 12:26:35.035162] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.096 [2024-06-11 12:26:35.035628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.096 [2024-06-11 12:26:35.035973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.096 [2024-06-11 12:26:35.035983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.096 [2024-06-11 12:26:35.035991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.096 [2024-06-11 12:26:35.036178] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.096 [2024-06-11 12:26:35.036322] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.096 [2024-06-11 12:26:35.036331] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.096 [2024-06-11 12:26:35.036338] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.096 [2024-06-11 12:26:35.038711] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.096 [2024-06-11 12:26:35.047491] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.096 [2024-06-11 12:26:35.048075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.096 [2024-06-11 12:26:35.048418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.096 [2024-06-11 12:26:35.048432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.096 [2024-06-11 12:26:35.048441] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.096 [2024-06-11 12:26:35.048623] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.096 [2024-06-11 12:26:35.048752] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.096 [2024-06-11 12:26:35.048762] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.096 [2024-06-11 12:26:35.048770] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.096 [2024-06-11 12:26:35.051069] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.096 [2024-06-11 12:26:35.059968] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.096 [2024-06-11 12:26:35.060570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.096 [2024-06-11 12:26:35.060803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.096 [2024-06-11 12:26:35.060817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.096 [2024-06-11 12:26:35.060832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.096 [2024-06-11 12:26:35.061043] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.096 [2024-06-11 12:26:35.061192] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.096 [2024-06-11 12:26:35.061200] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.096 [2024-06-11 12:26:35.061208] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.096 [2024-06-11 12:26:35.063518] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.096 [2024-06-11 12:26:35.072449] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.096 [2024-06-11 12:26:35.072917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.096 [2024-06-11 12:26:35.073237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.096 [2024-06-11 12:26:35.073249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.096 [2024-06-11 12:26:35.073257] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.096 [2024-06-11 12:26:35.073420] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.096 [2024-06-11 12:26:35.073565] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.096 [2024-06-11 12:26:35.073574] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.096 [2024-06-11 12:26:35.073580] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.096 [2024-06-11 12:26:35.076013] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.096 [2024-06-11 12:26:35.084792] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.096 [2024-06-11 12:26:35.085268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.096 [2024-06-11 12:26:35.085617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.096 [2024-06-11 12:26:35.085631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.096 [2024-06-11 12:26:35.085640] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.096 [2024-06-11 12:26:35.085803] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.096 [2024-06-11 12:26:35.085914] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.096 [2024-06-11 12:26:35.085923] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.096 [2024-06-11 12:26:35.085930] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.096 [2024-06-11 12:26:35.088230] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.096 [2024-06-11 12:26:35.097304] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.096 [2024-06-11 12:26:35.097876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.096 [2024-06-11 12:26:35.098194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.096 [2024-06-11 12:26:35.098210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.096 [2024-06-11 12:26:35.098219] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.096 [2024-06-11 12:26:35.098443] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.096 [2024-06-11 12:26:35.098537] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.096 [2024-06-11 12:26:35.098546] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.096 [2024-06-11 12:26:35.098554] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.096 [2024-06-11 12:26:35.100828] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.096 [2024-06-11 12:26:35.109795] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.096 [2024-06-11 12:26:35.110359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.096 [2024-06-11 12:26:35.110687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.096 [2024-06-11 12:26:35.110701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.096 [2024-06-11 12:26:35.110710] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.096 [2024-06-11 12:26:35.110854] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.096 [2024-06-11 12:26:35.111003] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.096 [2024-06-11 12:26:35.111012] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.096 [2024-06-11 12:26:35.111032] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.096 [2024-06-11 12:26:35.113138] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.096 [2024-06-11 12:26:35.122471] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.096 [2024-06-11 12:26:35.122993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.096 [2024-06-11 12:26:35.123315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.096 [2024-06-11 12:26:35.123330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.096 [2024-06-11 12:26:35.123339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.096 [2024-06-11 12:26:35.123539] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.096 [2024-06-11 12:26:35.123725] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.096 [2024-06-11 12:26:35.123734] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.096 [2024-06-11 12:26:35.123742] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.096 [2024-06-11 12:26:35.125946] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.358 [2024-06-11 12:26:35.135035] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.358 [2024-06-11 12:26:35.135623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.358 [2024-06-11 12:26:35.135947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.358 [2024-06-11 12:26:35.135961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.358 [2024-06-11 12:26:35.135970] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.358 [2024-06-11 12:26:35.136199] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.358 [2024-06-11 12:26:35.136334] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.358 [2024-06-11 12:26:35.136343] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.358 [2024-06-11 12:26:35.136350] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.358 [2024-06-11 12:26:35.138751] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.358 [2024-06-11 12:26:35.147270] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.358 [2024-06-11 12:26:35.147853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.358 [2024-06-11 12:26:35.148076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.358 [2024-06-11 12:26:35.148092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.358 [2024-06-11 12:26:35.148102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.358 [2024-06-11 12:26:35.148246] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.358 [2024-06-11 12:26:35.148395] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.358 [2024-06-11 12:26:35.148405] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.358 [2024-06-11 12:26:35.148412] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.358 [2024-06-11 12:26:35.150782] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.358 [2024-06-11 12:26:35.159869] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.358 [2024-06-11 12:26:35.160473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.358 [2024-06-11 12:26:35.160797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.358 [2024-06-11 12:26:35.160811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.358 [2024-06-11 12:26:35.160820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.358 [2024-06-11 12:26:35.161030] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.358 [2024-06-11 12:26:35.161198] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.358 [2024-06-11 12:26:35.161207] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.358 [2024-06-11 12:26:35.161215] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.358 [2024-06-11 12:26:35.163564] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.358 [2024-06-11 12:26:35.172436] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.358 [2024-06-11 12:26:35.173005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.358 [2024-06-11 12:26:35.173346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.358 [2024-06-11 12:26:35.173360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.358 [2024-06-11 12:26:35.173369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.358 [2024-06-11 12:26:35.173551] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.358 [2024-06-11 12:26:35.173700] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.358 [2024-06-11 12:26:35.173713] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.358 [2024-06-11 12:26:35.173720] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.358 [2024-06-11 12:26:35.175993] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.358 [2024-06-11 12:26:35.185068] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.358 [2024-06-11 12:26:35.185520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.358 [2024-06-11 12:26:35.185814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.358 [2024-06-11 12:26:35.185825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.358 [2024-06-11 12:26:35.185832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.358 [2024-06-11 12:26:35.185976] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.358 [2024-06-11 12:26:35.186107] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.358 [2024-06-11 12:26:35.186116] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.358 [2024-06-11 12:26:35.186123] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.358 [2024-06-11 12:26:35.188298] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.358 [2024-06-11 12:26:35.197530] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.358 [2024-06-11 12:26:35.197924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.358 [2024-06-11 12:26:35.198240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.358 [2024-06-11 12:26:35.198251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.358 [2024-06-11 12:26:35.198259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.358 [2024-06-11 12:26:35.198403] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.358 [2024-06-11 12:26:35.198567] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.358 [2024-06-11 12:26:35.198575] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.358 [2024-06-11 12:26:35.198582] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.358 [2024-06-11 12:26:35.200740] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.358 [2024-06-11 12:26:35.210269] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.358 [2024-06-11 12:26:35.210721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.359 [2024-06-11 12:26:35.211035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.359 [2024-06-11 12:26:35.211047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.359 [2024-06-11 12:26:35.211055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.359 [2024-06-11 12:26:35.211217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.359 [2024-06-11 12:26:35.211361] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.359 [2024-06-11 12:26:35.211370] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.359 [2024-06-11 12:26:35.211384] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.359 [2024-06-11 12:26:35.213742] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.359 [2024-06-11 12:26:35.222546] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.359 [2024-06-11 12:26:35.223151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.359 [2024-06-11 12:26:35.223498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.359 [2024-06-11 12:26:35.223512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.359 [2024-06-11 12:26:35.223521] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.359 [2024-06-11 12:26:35.223647] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.359 [2024-06-11 12:26:35.223813] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.359 [2024-06-11 12:26:35.223822] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.359 [2024-06-11 12:26:35.223829] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.359 [2024-06-11 12:26:35.226112] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.359 [2024-06-11 12:26:35.235197] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.359 [2024-06-11 12:26:35.235751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.359 [2024-06-11 12:26:35.236135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.359 [2024-06-11 12:26:35.236150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.359 [2024-06-11 12:26:35.236159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.359 [2024-06-11 12:26:35.236304] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.359 [2024-06-11 12:26:35.236452] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.359 [2024-06-11 12:26:35.236461] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.359 [2024-06-11 12:26:35.236469] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.359 [2024-06-11 12:26:35.238711] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.359 [2024-06-11 12:26:35.247648] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.359 [2024-06-11 12:26:35.248148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.359 [2024-06-11 12:26:35.248409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.359 [2024-06-11 12:26:35.248423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.359 [2024-06-11 12:26:35.248433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.359 [2024-06-11 12:26:35.248597] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.359 [2024-06-11 12:26:35.248745] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.359 [2024-06-11 12:26:35.248755] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.359 [2024-06-11 12:26:35.248762] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.359 [2024-06-11 12:26:35.251108] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.359 [2024-06-11 12:26:35.260175] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.359 [2024-06-11 12:26:35.260658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.359 [2024-06-11 12:26:35.260974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.359 [2024-06-11 12:26:35.260984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.359 [2024-06-11 12:26:35.260992] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.359 [2024-06-11 12:26:35.261141] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.359 [2024-06-11 12:26:35.261304] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.359 [2024-06-11 12:26:35.261313] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.359 [2024-06-11 12:26:35.261320] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.359 [2024-06-11 12:26:35.263548] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.359 [2024-06-11 12:26:35.272723] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.359 [2024-06-11 12:26:35.273191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.359 [2024-06-11 12:26:35.273516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.359 [2024-06-11 12:26:35.273527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.359 [2024-06-11 12:26:35.273534] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.359 [2024-06-11 12:26:35.273660] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.359 [2024-06-11 12:26:35.273786] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.359 [2024-06-11 12:26:35.273794] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.359 [2024-06-11 12:26:35.273801] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.359 [2024-06-11 12:26:35.276133] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.359 [2024-06-11 12:26:35.285338] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.359 [2024-06-11 12:26:35.285892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.359 [2024-06-11 12:26:35.286244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.359 [2024-06-11 12:26:35.286259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.359 [2024-06-11 12:26:35.286269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.359 [2024-06-11 12:26:35.286470] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.359 [2024-06-11 12:26:35.286601] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.359 [2024-06-11 12:26:35.286611] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.359 [2024-06-11 12:26:35.286619] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.359 [2024-06-11 12:26:35.288929] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.359 [2024-06-11 12:26:35.297775] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.359 [2024-06-11 12:26:35.298245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.359 [2024-06-11 12:26:35.298571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.359 [2024-06-11 12:26:35.298582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.359 [2024-06-11 12:26:35.298589] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.359 [2024-06-11 12:26:35.298752] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.359 [2024-06-11 12:26:35.298915] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.359 [2024-06-11 12:26:35.298925] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.359 [2024-06-11 12:26:35.298932] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.359 [2024-06-11 12:26:35.301050] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.359 [2024-06-11 12:26:35.310339] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.359 [2024-06-11 12:26:35.310815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.359 [2024-06-11 12:26:35.311042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.359 [2024-06-11 12:26:35.311052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.359 [2024-06-11 12:26:35.311060] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.359 [2024-06-11 12:26:35.311241] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.360 [2024-06-11 12:26:35.311369] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.360 [2024-06-11 12:26:35.311378] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.360 [2024-06-11 12:26:35.311385] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.360 [2024-06-11 12:26:35.313518] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.360 [2024-06-11 12:26:35.322994] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.360 [2024-06-11 12:26:35.323410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.360 [2024-06-11 12:26:35.323755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.360 [2024-06-11 12:26:35.323768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.360 [2024-06-11 12:26:35.323777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.360 [2024-06-11 12:26:35.323978] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.360 [2024-06-11 12:26:35.324153] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.360 [2024-06-11 12:26:35.324163] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.360 [2024-06-11 12:26:35.324170] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.360 [2024-06-11 12:26:35.326386] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.360 [2024-06-11 12:26:35.335529] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.360 [2024-06-11 12:26:35.335991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.360 [2024-06-11 12:26:35.336331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.360 [2024-06-11 12:26:35.336342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.360 [2024-06-11 12:26:35.336350] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.360 [2024-06-11 12:26:35.336457] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.360 [2024-06-11 12:26:35.336601] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.360 [2024-06-11 12:26:35.336610] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.360 [2024-06-11 12:26:35.336617] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.360 [2024-06-11 12:26:35.338975] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.360 [2024-06-11 12:26:35.347956] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.360 [2024-06-11 12:26:35.348431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.360 [2024-06-11 12:26:35.348752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.360 [2024-06-11 12:26:35.348762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.360 [2024-06-11 12:26:35.348770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.360 [2024-06-11 12:26:35.348857] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.360 [2024-06-11 12:26:35.349023] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.360 [2024-06-11 12:26:35.349032] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.360 [2024-06-11 12:26:35.349039] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.360 [2024-06-11 12:26:35.351309] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.360 [2024-06-11 12:26:35.360475] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.360 [2024-06-11 12:26:35.361000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.360 [2024-06-11 12:26:35.361320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.360 [2024-06-11 12:26:35.361335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.360 [2024-06-11 12:26:35.361345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.360 [2024-06-11 12:26:35.361527] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.360 [2024-06-11 12:26:35.361696] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.360 [2024-06-11 12:26:35.361706] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.360 [2024-06-11 12:26:35.361714] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.360 [2024-06-11 12:26:35.364126] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.360 [2024-06-11 12:26:35.372950] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.360 [2024-06-11 12:26:35.373474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.360 [2024-06-11 12:26:35.373805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.360 [2024-06-11 12:26:35.373822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.360 [2024-06-11 12:26:35.373832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.360 [2024-06-11 12:26:35.373995] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.360 [2024-06-11 12:26:35.374208] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.360 [2024-06-11 12:26:35.374218] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.360 [2024-06-11 12:26:35.374225] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.360 [2024-06-11 12:26:35.376433] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.360 [2024-06-11 12:26:35.385200] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.360 [2024-06-11 12:26:35.385665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.360 [2024-06-11 12:26:35.385869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.360 [2024-06-11 12:26:35.385879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.360 [2024-06-11 12:26:35.385887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.360 [2024-06-11 12:26:35.386013] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.360 [2024-06-11 12:26:35.386181] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.360 [2024-06-11 12:26:35.386191] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.360 [2024-06-11 12:26:35.386198] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.360 [2024-06-11 12:26:35.388721] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.620 [2024-06-11 12:26:35.397700] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.620 [2024-06-11 12:26:35.398121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.620 [2024-06-11 12:26:35.398445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.620 [2024-06-11 12:26:35.398455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.620 [2024-06-11 12:26:35.398463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.620 [2024-06-11 12:26:35.398569] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.620 [2024-06-11 12:26:35.398696] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.620 [2024-06-11 12:26:35.398705] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.620 [2024-06-11 12:26:35.398712] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.620 [2024-06-11 12:26:35.401011] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.620 [2024-06-11 12:26:35.410423] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.620 [2024-06-11 12:26:35.410915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.620 [2024-06-11 12:26:35.411090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.620 [2024-06-11 12:26:35.411101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.620 [2024-06-11 12:26:35.411113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.620 [2024-06-11 12:26:35.411275] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.620 [2024-06-11 12:26:35.411495] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.620 [2024-06-11 12:26:35.411504] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.620 [2024-06-11 12:26:35.411510] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.620 [2024-06-11 12:26:35.413701] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.620 [2024-06-11 12:26:35.422808] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.620 [2024-06-11 12:26:35.423288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.620 [2024-06-11 12:26:35.423590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.620 [2024-06-11 12:26:35.423601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.620 [2024-06-11 12:26:35.423609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.620 [2024-06-11 12:26:35.423753] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.620 [2024-06-11 12:26:35.423860] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.620 [2024-06-11 12:26:35.423869] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.620 [2024-06-11 12:26:35.423876] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.620 [2024-06-11 12:26:35.426032] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.620 [2024-06-11 12:26:35.435333] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.620 [2024-06-11 12:26:35.435643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.620 [2024-06-11 12:26:35.435811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.620 [2024-06-11 12:26:35.435822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.620 [2024-06-11 12:26:35.435830] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.620 [2024-06-11 12:26:35.435975] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.620 [2024-06-11 12:26:35.436125] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.620 [2024-06-11 12:26:35.436135] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.620 [2024-06-11 12:26:35.436143] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.620 [2024-06-11 12:26:35.438427] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.620 [2024-06-11 12:26:35.447709] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.620 [2024-06-11 12:26:35.448219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.620 [2024-06-11 12:26:35.448578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.620 [2024-06-11 12:26:35.448592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.620 [2024-06-11 12:26:35.448601] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.620 [2024-06-11 12:26:35.448788] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.621 [2024-06-11 12:26:35.448917] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.621 [2024-06-11 12:26:35.448926] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.621 [2024-06-11 12:26:35.448934] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.621 [2024-06-11 12:26:35.451192] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.621 [2024-06-11 12:26:35.460242] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.621 [2024-06-11 12:26:35.460776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.461095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.461110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.621 [2024-06-11 12:26:35.461120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.621 [2024-06-11 12:26:35.461246] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.621 [2024-06-11 12:26:35.461412] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.621 [2024-06-11 12:26:35.461421] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.621 [2024-06-11 12:26:35.461428] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.621 [2024-06-11 12:26:35.463685] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.621 [2024-06-11 12:26:35.472842] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.621 [2024-06-11 12:26:35.473306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.473598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.473608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.621 [2024-06-11 12:26:35.473616] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.621 [2024-06-11 12:26:35.473760] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.621 [2024-06-11 12:26:35.473905] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.621 [2024-06-11 12:26:35.473914] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.621 [2024-06-11 12:26:35.473922] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.621 [2024-06-11 12:26:35.476330] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.621 [2024-06-11 12:26:35.485357] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.621 [2024-06-11 12:26:35.485818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.486111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.486123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.621 [2024-06-11 12:26:35.486130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.621 [2024-06-11 12:26:35.486237] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.621 [2024-06-11 12:26:35.486385] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.621 [2024-06-11 12:26:35.486393] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.621 [2024-06-11 12:26:35.486401] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.621 [2024-06-11 12:26:35.488635] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.621 [2024-06-11 12:26:35.497846] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.621 [2024-06-11 12:26:35.498388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.498733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.498746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.621 [2024-06-11 12:26:35.498757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.621 [2024-06-11 12:26:35.498901] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.621 [2024-06-11 12:26:35.499114] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.621 [2024-06-11 12:26:35.499123] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.621 [2024-06-11 12:26:35.499131] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.621 [2024-06-11 12:26:35.501296] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.621 [2024-06-11 12:26:35.510447] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.621 [2024-06-11 12:26:35.510870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.511266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.511277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.621 [2024-06-11 12:26:35.511286] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.621 [2024-06-11 12:26:35.511468] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.621 [2024-06-11 12:26:35.511613] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.621 [2024-06-11 12:26:35.511621] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.621 [2024-06-11 12:26:35.511629] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.621 [2024-06-11 12:26:35.513805] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.621 [2024-06-11 12:26:35.523090] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.621 [2024-06-11 12:26:35.523588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.523922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.523932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.621 [2024-06-11 12:26:35.523939] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.621 [2024-06-11 12:26:35.524088] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.621 [2024-06-11 12:26:35.524215] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.621 [2024-06-11 12:26:35.524224] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.621 [2024-06-11 12:26:35.524235] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.621 [2024-06-11 12:26:35.526409] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.621 [2024-06-11 12:26:35.535665] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.621 [2024-06-11 12:26:35.536128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.536504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.536518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.621 [2024-06-11 12:26:35.536527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.621 [2024-06-11 12:26:35.536653] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.621 [2024-06-11 12:26:35.536839] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.621 [2024-06-11 12:26:35.536849] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.621 [2024-06-11 12:26:35.536857] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.621 [2024-06-11 12:26:35.539025] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.621 [2024-06-11 12:26:35.548103] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.621 [2024-06-11 12:26:35.548413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.548751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.548762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.621 [2024-06-11 12:26:35.548769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.621 [2024-06-11 12:26:35.548895] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.621 [2024-06-11 12:26:35.549045] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.621 [2024-06-11 12:26:35.549055] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.621 [2024-06-11 12:26:35.549061] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.621 [2024-06-11 12:26:35.551201] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.621 [2024-06-11 12:26:35.560558] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.621 [2024-06-11 12:26:35.561131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.561523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.561538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.621 [2024-06-11 12:26:35.561547] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.621 [2024-06-11 12:26:35.561710] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.621 [2024-06-11 12:26:35.561859] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.621 [2024-06-11 12:26:35.561867] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.621 [2024-06-11 12:26:35.561879] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.621 [2024-06-11 12:26:35.564082] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.621 [2024-06-11 12:26:35.572834] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.621 [2024-06-11 12:26:35.573342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.573684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.573695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.621 [2024-06-11 12:26:35.573703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.621 [2024-06-11 12:26:35.573772] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.621 [2024-06-11 12:26:35.573897] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.621 [2024-06-11 12:26:35.573906] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.621 [2024-06-11 12:26:35.573912] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.621 [2024-06-11 12:26:35.576179] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.621 [2024-06-11 12:26:35.585388] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.621 [2024-06-11 12:26:35.585837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.586177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.586188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.621 [2024-06-11 12:26:35.586195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.621 [2024-06-11 12:26:35.586339] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.621 [2024-06-11 12:26:35.586502] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.621 [2024-06-11 12:26:35.586510] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.621 [2024-06-11 12:26:35.586517] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.621 [2024-06-11 12:26:35.588841] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.621 [2024-06-11 12:26:35.597921] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.621 [2024-06-11 12:26:35.598499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.598872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.598886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.621 [2024-06-11 12:26:35.598895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.621 [2024-06-11 12:26:35.599084] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.621 [2024-06-11 12:26:35.599195] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.621 [2024-06-11 12:26:35.599205] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.621 [2024-06-11 12:26:35.599212] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.621 [2024-06-11 12:26:35.601688] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.621 [2024-06-11 12:26:35.610600] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.621 [2024-06-11 12:26:35.611242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.611618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.611632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.621 [2024-06-11 12:26:35.611641] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.621 [2024-06-11 12:26:35.611805] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.621 [2024-06-11 12:26:35.611990] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.621 [2024-06-11 12:26:35.611999] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.621 [2024-06-11 12:26:35.612006] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.621 [2024-06-11 12:26:35.614194] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.621 [2024-06-11 12:26:35.623116] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.621 [2024-06-11 12:26:35.623584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.623837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.623851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.621 [2024-06-11 12:26:35.623861] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.621 [2024-06-11 12:26:35.624006] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.621 [2024-06-11 12:26:35.624182] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.621 [2024-06-11 12:26:35.624192] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.621 [2024-06-11 12:26:35.624200] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.621 [2024-06-11 12:26:35.626363] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.621 [2024-06-11 12:26:35.635517] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.621 [2024-06-11 12:26:35.636124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.636470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.636483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.621 [2024-06-11 12:26:35.636493] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.621 [2024-06-11 12:26:35.636712] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.621 [2024-06-11 12:26:35.636879] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.621 [2024-06-11 12:26:35.636889] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.621 [2024-06-11 12:26:35.636896] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.621 [2024-06-11 12:26:35.639139] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.621 [2024-06-11 12:26:35.647914] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.621 [2024-06-11 12:26:35.648412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.648601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.621 [2024-06-11 12:26:35.648613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.621 [2024-06-11 12:26:35.648620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.621 [2024-06-11 12:26:35.648746] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.621 [2024-06-11 12:26:35.648909] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.621 [2024-06-11 12:26:35.648919] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.621 [2024-06-11 12:26:35.648926] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.621 [2024-06-11 12:26:35.651239] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.882 [2024-06-11 12:26:35.660502] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.882 [2024-06-11 12:26:35.660904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.882 [2024-06-11 12:26:35.661274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.882 [2024-06-11 12:26:35.661285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.882 [2024-06-11 12:26:35.661293] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.882 [2024-06-11 12:26:35.661418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.882 [2024-06-11 12:26:35.661544] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.882 [2024-06-11 12:26:35.661553] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.882 [2024-06-11 12:26:35.661560] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.882 [2024-06-11 12:26:35.663845] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.882 [2024-06-11 12:26:35.673248] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.882 [2024-06-11 12:26:35.673737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.882 [2024-06-11 12:26:35.674023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.882 [2024-06-11 12:26:35.674037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.882 [2024-06-11 12:26:35.674047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.882 [2024-06-11 12:26:35.674210] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.882 [2024-06-11 12:26:35.674359] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.882 [2024-06-11 12:26:35.674368] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.882 [2024-06-11 12:26:35.674375] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.882 [2024-06-11 12:26:35.676620] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.883 [2024-06-11 12:26:35.685706] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.883 [2024-06-11 12:26:35.686216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.883 [2024-06-11 12:26:35.686566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.883 [2024-06-11 12:26:35.686577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.883 [2024-06-11 12:26:35.686584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.883 [2024-06-11 12:26:35.686766] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.883 [2024-06-11 12:26:35.686911] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.883 [2024-06-11 12:26:35.686920] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.883 [2024-06-11 12:26:35.686927] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.883 [2024-06-11 12:26:35.689440] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.883 [2024-06-11 12:26:35.698137] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.883 [2024-06-11 12:26:35.698604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.883 [2024-06-11 12:26:35.698784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.883 [2024-06-11 12:26:35.698797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.883 [2024-06-11 12:26:35.698806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.883 [2024-06-11 12:26:35.698970] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.883 [2024-06-11 12:26:35.699066] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.883 [2024-06-11 12:26:35.699079] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.883 [2024-06-11 12:26:35.699086] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.883 [2024-06-11 12:26:35.701372] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.883 [2024-06-11 12:26:35.710731] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.883 [2024-06-11 12:26:35.711159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.883 [2024-06-11 12:26:35.712111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.883 [2024-06-11 12:26:35.712137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.883 [2024-06-11 12:26:35.712147] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.883 [2024-06-11 12:26:35.712292] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.883 [2024-06-11 12:26:35.712459] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.883 [2024-06-11 12:26:35.712468] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.883 [2024-06-11 12:26:35.712476] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.883 [2024-06-11 12:26:35.714734] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.883 [2024-06-11 12:26:35.723392] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.883 [2024-06-11 12:26:35.723856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.883 [2024-06-11 12:26:35.724162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.883 [2024-06-11 12:26:35.724177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.883 [2024-06-11 12:26:35.724190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.883 [2024-06-11 12:26:35.724428] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.883 [2024-06-11 12:26:35.724614] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.883 [2024-06-11 12:26:35.724623] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.883 [2024-06-11 12:26:35.724631] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.883 [2024-06-11 12:26:35.727095] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.883 [2024-06-11 12:26:35.735922] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.883 [2024-06-11 12:26:35.736362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.883 [2024-06-11 12:26:35.736657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.883 [2024-06-11 12:26:35.736668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.883 [2024-06-11 12:26:35.736676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.883 [2024-06-11 12:26:35.736839] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.883 [2024-06-11 12:26:35.737003] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.883 [2024-06-11 12:26:35.737012] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.883 [2024-06-11 12:26:35.737024] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.883 [2024-06-11 12:26:35.739383] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.883 [2024-06-11 12:26:35.748601] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.883 [2024-06-11 12:26:35.749093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.883 [2024-06-11 12:26:35.749402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.883 [2024-06-11 12:26:35.749414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.883 [2024-06-11 12:26:35.749421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.883 [2024-06-11 12:26:35.749547] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.883 [2024-06-11 12:26:35.749674] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.883 [2024-06-11 12:26:35.749684] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.883 [2024-06-11 12:26:35.749691] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.883 [2024-06-11 12:26:35.752069] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.883 [2024-06-11 12:26:35.761305] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.883 [2024-06-11 12:26:35.761777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.883 [2024-06-11 12:26:35.762052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.883 [2024-06-11 12:26:35.762063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.883 [2024-06-11 12:26:35.762071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.883 [2024-06-11 12:26:35.762220] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.883 [2024-06-11 12:26:35.762420] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.883 [2024-06-11 12:26:35.762428] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.883 [2024-06-11 12:26:35.762435] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.883 [2024-06-11 12:26:35.764869] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.883 [2024-06-11 12:26:35.773753] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.883 [2024-06-11 12:26:35.774310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.883 [2024-06-11 12:26:35.774698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.883 [2024-06-11 12:26:35.774711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.883 [2024-06-11 12:26:35.774721] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.883 [2024-06-11 12:26:35.774903] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.883 [2024-06-11 12:26:35.775077] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.883 [2024-06-11 12:26:35.775086] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.883 [2024-06-11 12:26:35.775094] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.883 [2024-06-11 12:26:35.777395] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.883 [2024-06-11 12:26:35.786121] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.883 [2024-06-11 12:26:35.786574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.883 [2024-06-11 12:26:35.786905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.883 [2024-06-11 12:26:35.786916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.883 [2024-06-11 12:26:35.786924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.883 [2024-06-11 12:26:35.787092] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.883 [2024-06-11 12:26:35.787219] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.884 [2024-06-11 12:26:35.787228] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.884 [2024-06-11 12:26:35.787235] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.884 [2024-06-11 12:26:35.789723] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.884 [2024-06-11 12:26:35.798574] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.884 [2024-06-11 12:26:35.799063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.884 [2024-06-11 12:26:35.799403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.884 [2024-06-11 12:26:35.799413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.884 [2024-06-11 12:26:35.799421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.884 [2024-06-11 12:26:35.799602] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.884 [2024-06-11 12:26:35.799751] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.884 [2024-06-11 12:26:35.799760] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.884 [2024-06-11 12:26:35.799768] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.884 [2024-06-11 12:26:35.802013] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.884 [2024-06-11 12:26:35.810952] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.884 [2024-06-11 12:26:35.811438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.884 [2024-06-11 12:26:35.811779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.884 [2024-06-11 12:26:35.811789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.884 [2024-06-11 12:26:35.811797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.884 [2024-06-11 12:26:35.811940] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.884 [2024-06-11 12:26:35.812091] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.884 [2024-06-11 12:26:35.812100] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.884 [2024-06-11 12:26:35.812108] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.884 [2024-06-11 12:26:35.814244] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.884 [2024-06-11 12:26:35.823358] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.884 [2024-06-11 12:26:35.823882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.884 [2024-06-11 12:26:35.824094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.884 [2024-06-11 12:26:35.824105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.884 [2024-06-11 12:26:35.824112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.884 [2024-06-11 12:26:35.824275] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.884 [2024-06-11 12:26:35.824401] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.884 [2024-06-11 12:26:35.824410] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.884 [2024-06-11 12:26:35.824417] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.884 [2024-06-11 12:26:35.826683] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.884 [2024-06-11 12:26:35.835977] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.884 [2024-06-11 12:26:35.836548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.884 [2024-06-11 12:26:35.836889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.884 [2024-06-11 12:26:35.836902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.884 [2024-06-11 12:26:35.836911] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.884 [2024-06-11 12:26:35.837120] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.884 [2024-06-11 12:26:35.837250] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.884 [2024-06-11 12:26:35.837263] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.884 [2024-06-11 12:26:35.837271] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.884 [2024-06-11 12:26:35.839396] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.884 [2024-06-11 12:26:35.848324] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.884 [2024-06-11 12:26:35.848885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.884 [2024-06-11 12:26:35.849354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.884 [2024-06-11 12:26:35.849369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.884 [2024-06-11 12:26:35.849379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.884 [2024-06-11 12:26:35.849542] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.884 [2024-06-11 12:26:35.849728] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.884 [2024-06-11 12:26:35.849737] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.884 [2024-06-11 12:26:35.849744] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.884 [2024-06-11 12:26:35.851987] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.884 [2024-06-11 12:26:35.860884] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.884 [2024-06-11 12:26:35.861331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.884 [2024-06-11 12:26:35.861637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.884 [2024-06-11 12:26:35.861651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.884 [2024-06-11 12:26:35.861660] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.884 [2024-06-11 12:26:35.861823] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.884 [2024-06-11 12:26:35.862009] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.884 [2024-06-11 12:26:35.862024] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.884 [2024-06-11 12:26:35.862033] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.884 [2024-06-11 12:26:35.864475] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.884 [2024-06-11 12:26:35.873439] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.884 [2024-06-11 12:26:35.873780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.884 [2024-06-11 12:26:35.874131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.884 [2024-06-11 12:26:35.874142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.884 [2024-06-11 12:26:35.874150] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.884 [2024-06-11 12:26:35.874257] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.884 [2024-06-11 12:26:35.874402] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.884 [2024-06-11 12:26:35.874411] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.884 [2024-06-11 12:26:35.874422] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.884 [2024-06-11 12:26:35.876835] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.884 [2024-06-11 12:26:35.886072] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.884 [2024-06-11 12:26:35.886423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.884 [2024-06-11 12:26:35.886618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.884 [2024-06-11 12:26:35.886629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.884 [2024-06-11 12:26:35.886637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.884 [2024-06-11 12:26:35.886781] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.884 [2024-06-11 12:26:35.886926] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.884 [2024-06-11 12:26:35.886936] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.884 [2024-06-11 12:26:35.886942] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.884 [2024-06-11 12:26:35.889323] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.885 [2024-06-11 12:26:35.898637] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.885 [2024-06-11 12:26:35.899137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.885 [2024-06-11 12:26:35.899483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.885 [2024-06-11 12:26:35.899494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.885 [2024-06-11 12:26:35.899501] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.885 [2024-06-11 12:26:35.899589] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.885 [2024-06-11 12:26:35.899752] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.885 [2024-06-11 12:26:35.899761] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.885 [2024-06-11 12:26:35.899767] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.885 [2024-06-11 12:26:35.902094] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.885 [2024-06-11 12:26:35.911093] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:22.885 [2024-06-11 12:26:35.911557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.885 [2024-06-11 12:26:35.911887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.885 [2024-06-11 12:26:35.911898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:22.885 [2024-06-11 12:26:35.911905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:22.885 [2024-06-11 12:26:35.912056] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:22.885 [2024-06-11 12:26:35.912183] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:22.885 [2024-06-11 12:26:35.912192] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:22.885 [2024-06-11 12:26:35.912200] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:22.885 [2024-06-11 12:26:35.914530] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.146 [2024-06-11 12:26:35.923486] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.146 [2024-06-11 12:26:35.923958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.146 [2024-06-11 12:26:35.924140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.146 [2024-06-11 12:26:35.924151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.146 [2024-06-11 12:26:35.924159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.146 [2024-06-11 12:26:35.924284] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.146 [2024-06-11 12:26:35.924410] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.146 [2024-06-11 12:26:35.924420] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.146 [2024-06-11 12:26:35.924427] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.146 [2024-06-11 12:26:35.926828] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.146 [2024-06-11 12:26:35.936231] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.146 [2024-06-11 12:26:35.936807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.146 [2024-06-11 12:26:35.937184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.146 [2024-06-11 12:26:35.937200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.146 [2024-06-11 12:26:35.937209] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.146 [2024-06-11 12:26:35.937428] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.146 [2024-06-11 12:26:35.937539] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.146 [2024-06-11 12:26:35.937548] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.146 [2024-06-11 12:26:35.937556] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.146 [2024-06-11 12:26:35.939961] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.146 [2024-06-11 12:26:35.948477] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.146 [2024-06-11 12:26:35.949046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.146 [2024-06-11 12:26:35.949394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.146 [2024-06-11 12:26:35.949408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.146 [2024-06-11 12:26:35.949417] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.146 [2024-06-11 12:26:35.949617] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.146 [2024-06-11 12:26:35.949802] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.146 [2024-06-11 12:26:35.949813] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.147 [2024-06-11 12:26:35.949820] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.147 [2024-06-11 12:26:35.952027] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.147 [2024-06-11 12:26:35.960958] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.147 [2024-06-11 12:26:35.961569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.147 [2024-06-11 12:26:35.961897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.147 [2024-06-11 12:26:35.961911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.147 [2024-06-11 12:26:35.961920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.147 [2024-06-11 12:26:35.962092] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.147 [2024-06-11 12:26:35.962241] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.147 [2024-06-11 12:26:35.962250] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.147 [2024-06-11 12:26:35.962257] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.147 [2024-06-11 12:26:35.964586] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.147 [2024-06-11 12:26:35.973493] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.147 [2024-06-11 12:26:35.974093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.147 [2024-06-11 12:26:35.974437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.147 [2024-06-11 12:26:35.974451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.147 [2024-06-11 12:26:35.974460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.147 [2024-06-11 12:26:35.974604] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.147 [2024-06-11 12:26:35.974715] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.147 [2024-06-11 12:26:35.974724] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.147 [2024-06-11 12:26:35.974731] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.147 [2024-06-11 12:26:35.977079] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.147 [2024-06-11 12:26:35.986126] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.147 [2024-06-11 12:26:35.986633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.147 [2024-06-11 12:26:35.986967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.147 [2024-06-11 12:26:35.986981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.147 [2024-06-11 12:26:35.986990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.147 [2024-06-11 12:26:35.987180] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.147 [2024-06-11 12:26:35.987348] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.147 [2024-06-11 12:26:35.987358] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.147 [2024-06-11 12:26:35.987365] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.147 [2024-06-11 12:26:35.989712] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.147 [2024-06-11 12:26:35.998775] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.147 [2024-06-11 12:26:35.999304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.147 [2024-06-11 12:26:35.999637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.147 [2024-06-11 12:26:35.999651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.147 [2024-06-11 12:26:35.999660] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.147 [2024-06-11 12:26:35.999824] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.147 [2024-06-11 12:26:35.999991] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.147 [2024-06-11 12:26:36.000000] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.147 [2024-06-11 12:26:36.000008] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.147 [2024-06-11 12:26:36.001988] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.147 [2024-06-11 12:26:36.011229] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.147 [2024-06-11 12:26:36.011822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.147 [2024-06-11 12:26:36.012168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.147 [2024-06-11 12:26:36.012184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.147 [2024-06-11 12:26:36.012194] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.147 [2024-06-11 12:26:36.012375] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.147 [2024-06-11 12:26:36.012525] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.147 [2024-06-11 12:26:36.012534] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.147 [2024-06-11 12:26:36.012541] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.147 [2024-06-11 12:26:36.014466] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.147 [2024-06-11 12:26:36.023623] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.147 [2024-06-11 12:26:36.024213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.147 [2024-06-11 12:26:36.024579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.147 [2024-06-11 12:26:36.024593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.147 [2024-06-11 12:26:36.024602] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.147 [2024-06-11 12:26:36.024766] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.147 [2024-06-11 12:26:36.024933] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.147 [2024-06-11 12:26:36.024942] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.147 [2024-06-11 12:26:36.024950] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.147 [2024-06-11 12:26:36.027191] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.147 [2024-06-11 12:26:36.036130] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.147 [2024-06-11 12:26:36.036502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.147 [2024-06-11 12:26:36.036796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.147 [2024-06-11 12:26:36.036811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.147 [2024-06-11 12:26:36.036819] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.147 [2024-06-11 12:26:36.037000] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.147 [2024-06-11 12:26:36.037169] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.147 [2024-06-11 12:26:36.037178] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.147 [2024-06-11 12:26:36.037185] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.147 [2024-06-11 12:26:36.039451] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.147 [2024-06-11 12:26:36.048624] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.147 [2024-06-11 12:26:36.049064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.147 [2024-06-11 12:26:36.049394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.147 [2024-06-11 12:26:36.049405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.147 [2024-06-11 12:26:36.049413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.147 [2024-06-11 12:26:36.049542] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.147 [2024-06-11 12:26:36.049669] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.147 [2024-06-11 12:26:36.049677] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.147 [2024-06-11 12:26:36.049684] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.147 [2024-06-11 12:26:36.051860] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.147 [2024-06-11 12:26:36.061040] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.147 [2024-06-11 12:26:36.061662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.148 [2024-06-11 12:26:36.062031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.148 [2024-06-11 12:26:36.062046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.148 [2024-06-11 12:26:36.062055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.148 [2024-06-11 12:26:36.062236] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.148 [2024-06-11 12:26:36.062347] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.148 [2024-06-11 12:26:36.062357] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.148 [2024-06-11 12:26:36.062364] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.148 [2024-06-11 12:26:36.064545] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.148 [2024-06-11 12:26:36.073755] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.148 [2024-06-11 12:26:36.074335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.148 [2024-06-11 12:26:36.074687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.148 [2024-06-11 12:26:36.074701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.148 [2024-06-11 12:26:36.074714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.148 [2024-06-11 12:26:36.074877] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.148 [2024-06-11 12:26:36.075035] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.148 [2024-06-11 12:26:36.075045] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.148 [2024-06-11 12:26:36.075053] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.148 [2024-06-11 12:26:36.077318] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.148 [2024-06-11 12:26:36.086296] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.148 [2024-06-11 12:26:36.086825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.148 [2024-06-11 12:26:36.087165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.148 [2024-06-11 12:26:36.087181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.148 [2024-06-11 12:26:36.087190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.148 [2024-06-11 12:26:36.087334] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.148 [2024-06-11 12:26:36.087483] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.148 [2024-06-11 12:26:36.087491] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.148 [2024-06-11 12:26:36.087499] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.148 [2024-06-11 12:26:36.089572] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.148 [2024-06-11 12:26:36.098705] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.148 [2024-06-11 12:26:36.099289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.148 [2024-06-11 12:26:36.099615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.148 [2024-06-11 12:26:36.099629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.148 [2024-06-11 12:26:36.099638] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.148 [2024-06-11 12:26:36.099782] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.148 [2024-06-11 12:26:36.099949] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.148 [2024-06-11 12:26:36.099959] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.148 [2024-06-11 12:26:36.099966] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.148 [2024-06-11 12:26:36.102081] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.148 [2024-06-11 12:26:36.111320] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.148 [2024-06-11 12:26:36.111818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.148 [2024-06-11 12:26:36.112243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.148 [2024-06-11 12:26:36.112281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.148 [2024-06-11 12:26:36.112293] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.148 [2024-06-11 12:26:36.112480] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.148 [2024-06-11 12:26:36.112647] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.148 [2024-06-11 12:26:36.112656] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.148 [2024-06-11 12:26:36.112664] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.148 [2024-06-11 12:26:36.114903] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.148 [2024-06-11 12:26:36.124110] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.148 [2024-06-11 12:26:36.124765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.148 [2024-06-11 12:26:36.125089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.148 [2024-06-11 12:26:36.125104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.148 [2024-06-11 12:26:36.125113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.148 [2024-06-11 12:26:36.125277] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.148 [2024-06-11 12:26:36.125407] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.148 [2024-06-11 12:26:36.125415] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.148 [2024-06-11 12:26:36.125423] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.148 [2024-06-11 12:26:36.127809] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.148 [2024-06-11 12:26:36.136686] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.148 [2024-06-11 12:26:36.137141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.148 [2024-06-11 12:26:36.137476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.148 [2024-06-11 12:26:36.137489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.148 [2024-06-11 12:26:36.137499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.148 [2024-06-11 12:26:36.137662] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.148 [2024-06-11 12:26:36.137810] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.148 [2024-06-11 12:26:36.137819] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.148 [2024-06-11 12:26:36.137826] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.148 [2024-06-11 12:26:36.140012] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.148 [2024-06-11 12:26:36.149434] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.148 [2024-06-11 12:26:36.149961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.148 [2024-06-11 12:26:36.150336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.148 [2024-06-11 12:26:36.150351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.148 [2024-06-11 12:26:36.150360] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.148 [2024-06-11 12:26:36.150560] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.148 [2024-06-11 12:26:36.150714] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.148 [2024-06-11 12:26:36.150723] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.148 [2024-06-11 12:26:36.150730] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.148 [2024-06-11 12:26:36.152986] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.148 [2024-06-11 12:26:36.161937] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.148 [2024-06-11 12:26:36.162552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.148 [2024-06-11 12:26:36.162900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.148 [2024-06-11 12:26:36.162914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.148 [2024-06-11 12:26:36.162923] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.148 [2024-06-11 12:26:36.163132] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.148 [2024-06-11 12:26:36.163243] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.149 [2024-06-11 12:26:36.163252] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.149 [2024-06-11 12:26:36.163259] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.149 [2024-06-11 12:26:36.165292] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.149 [2024-06-11 12:26:36.174507] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.149 [2024-06-11 12:26:36.175071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.149 [2024-06-11 12:26:36.175452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.149 [2024-06-11 12:26:36.175466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.149 [2024-06-11 12:26:36.175475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.149 [2024-06-11 12:26:36.175639] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.149 [2024-06-11 12:26:36.175768] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.149 [2024-06-11 12:26:36.175777] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.149 [2024-06-11 12:26:36.175784] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.149 [2024-06-11 12:26:36.177943] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.411 [2024-06-11 12:26:36.187038] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.411 [2024-06-11 12:26:36.187612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.411 [2024-06-11 12:26:36.187982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.411 [2024-06-11 12:26:36.187995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.411 [2024-06-11 12:26:36.188004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.411 [2024-06-11 12:26:36.188192] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.411 [2024-06-11 12:26:36.188341] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.411 [2024-06-11 12:26:36.188355] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.411 [2024-06-11 12:26:36.188362] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.411 [2024-06-11 12:26:36.190729] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.411 [2024-06-11 12:26:36.199418] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.411 [2024-06-11 12:26:36.199979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.411 [2024-06-11 12:26:36.200311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.411 [2024-06-11 12:26:36.200326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.411 [2024-06-11 12:26:36.200335] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.411 [2024-06-11 12:26:36.200536] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.411 [2024-06-11 12:26:36.200703] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.411 [2024-06-11 12:26:36.200711] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.411 [2024-06-11 12:26:36.200719] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.411 [2024-06-11 12:26:36.203014] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.411 [2024-06-11 12:26:36.212019] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.411 [2024-06-11 12:26:36.212477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.411 [2024-06-11 12:26:36.212781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.411 [2024-06-11 12:26:36.212792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.411 [2024-06-11 12:26:36.212800] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.411 [2024-06-11 12:26:36.212907] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.411 [2024-06-11 12:26:36.212996] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.411 [2024-06-11 12:26:36.213003] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.411 [2024-06-11 12:26:36.213010] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.411 [2024-06-11 12:26:36.215433] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.411 [2024-06-11 12:26:36.224513] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.411 [2024-06-11 12:26:36.225073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.411 [2024-06-11 12:26:36.225407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.411 [2024-06-11 12:26:36.225421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.411 [2024-06-11 12:26:36.225430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.411 [2024-06-11 12:26:36.225575] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.411 [2024-06-11 12:26:36.225760] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.411 [2024-06-11 12:26:36.225769] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.411 [2024-06-11 12:26:36.225781] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.411 [2024-06-11 12:26:36.227970] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.411 [2024-06-11 12:26:36.236930] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.411 [2024-06-11 12:26:36.237420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.411 [2024-06-11 12:26:36.237734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.411 [2024-06-11 12:26:36.237744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.411 [2024-06-11 12:26:36.237752] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.411 [2024-06-11 12:26:36.237896] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.411 [2024-06-11 12:26:36.238047] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.411 [2024-06-11 12:26:36.238057] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.411 [2024-06-11 12:26:36.238063] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.411 [2024-06-11 12:26:36.240223] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.411 [2024-06-11 12:26:36.249529] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.411 [2024-06-11 12:26:36.249977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.411 [2024-06-11 12:26:36.250282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.411 [2024-06-11 12:26:36.250294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.411 [2024-06-11 12:26:36.250301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.411 [2024-06-11 12:26:36.250446] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.411 [2024-06-11 12:26:36.250609] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.411 [2024-06-11 12:26:36.250618] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.411 [2024-06-11 12:26:36.250625] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.411 [2024-06-11 12:26:36.252870] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.411 [2024-06-11 12:26:36.261808] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.411 [2024-06-11 12:26:36.262234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.411 [2024-06-11 12:26:36.262530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.411 [2024-06-11 12:26:36.262541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.411 [2024-06-11 12:26:36.262550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.411 [2024-06-11 12:26:36.262676] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.411 [2024-06-11 12:26:36.262803] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.411 [2024-06-11 12:26:36.262812] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.411 [2024-06-11 12:26:36.262819] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.411 [2024-06-11 12:26:36.265050] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.411 [2024-06-11 12:26:36.274253] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.411 [2024-06-11 12:26:36.274606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.411 [2024-06-11 12:26:36.274887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.411 [2024-06-11 12:26:36.274898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.411 [2024-06-11 12:26:36.274905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.411 [2024-06-11 12:26:36.275035] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.411 [2024-06-11 12:26:36.275180] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.411 [2024-06-11 12:26:36.275188] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.411 [2024-06-11 12:26:36.275195] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.411 [2024-06-11 12:26:36.277637] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.411 [2024-06-11 12:26:36.286631] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.411 [2024-06-11 12:26:36.287255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.411 [2024-06-11 12:26:36.287577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.412 [2024-06-11 12:26:36.287591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.412 [2024-06-11 12:26:36.287600] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.412 [2024-06-11 12:26:36.287764] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.412 [2024-06-11 12:26:36.287968] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.412 [2024-06-11 12:26:36.287977] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.412 [2024-06-11 12:26:36.287984] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.412 [2024-06-11 12:26:36.290281] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.412 [2024-06-11 12:26:36.299251] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.412 [2024-06-11 12:26:36.299826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.412 [2024-06-11 12:26:36.300171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.412 [2024-06-11 12:26:36.300186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.412 [2024-06-11 12:26:36.300196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.412 [2024-06-11 12:26:36.300358] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.412 [2024-06-11 12:26:36.300506] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.412 [2024-06-11 12:26:36.300515] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.412 [2024-06-11 12:26:36.300522] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.412 [2024-06-11 12:26:36.302816] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.412 [2024-06-11 12:26:36.311825] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.412 [2024-06-11 12:26:36.312211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.412 [2024-06-11 12:26:36.312564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.412 [2024-06-11 12:26:36.312577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.412 [2024-06-11 12:26:36.312587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.412 [2024-06-11 12:26:36.312731] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.412 [2024-06-11 12:26:36.312898] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.412 [2024-06-11 12:26:36.312906] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.412 [2024-06-11 12:26:36.312914] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.412 [2024-06-11 12:26:36.315178] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.412 [2024-06-11 12:26:36.324210] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.412 [2024-06-11 12:26:36.324747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.412 [2024-06-11 12:26:36.325078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.412 [2024-06-11 12:26:36.325092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.412 [2024-06-11 12:26:36.325102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.412 [2024-06-11 12:26:36.325284] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.412 [2024-06-11 12:26:36.325451] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.412 [2024-06-11 12:26:36.325460] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.412 [2024-06-11 12:26:36.325468] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.412 [2024-06-11 12:26:36.327761] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.412 [2024-06-11 12:26:36.336505] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.412 [2024-06-11 12:26:36.336880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.412 [2024-06-11 12:26:36.337043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.412 [2024-06-11 12:26:36.337056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.412 [2024-06-11 12:26:36.337063] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.412 [2024-06-11 12:26:36.337227] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.412 [2024-06-11 12:26:36.337372] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.412 [2024-06-11 12:26:36.337382] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.412 [2024-06-11 12:26:36.337389] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.412 [2024-06-11 12:26:36.339753] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.412 [2024-06-11 12:26:36.348981] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.412 [2024-06-11 12:26:36.349502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.412 [2024-06-11 12:26:36.349740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.412 [2024-06-11 12:26:36.349753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.412 [2024-06-11 12:26:36.349763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.412 [2024-06-11 12:26:36.349944] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.412 [2024-06-11 12:26:36.350140] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.412 [2024-06-11 12:26:36.350150] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.412 [2024-06-11 12:26:36.350157] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.412 [2024-06-11 12:26:36.352447] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.412 [2024-06-11 12:26:36.361619] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.412 [2024-06-11 12:26:36.362193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.412 [2024-06-11 12:26:36.362565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.412 [2024-06-11 12:26:36.362578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.412 [2024-06-11 12:26:36.362588] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.412 [2024-06-11 12:26:36.362714] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.412 [2024-06-11 12:26:36.362899] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.412 [2024-06-11 12:26:36.362908] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.412 [2024-06-11 12:26:36.362916] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.412 [2024-06-11 12:26:36.365177] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.412 [2024-06-11 12:26:36.374223] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.412 [2024-06-11 12:26:36.374824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.412 [2024-06-11 12:26:36.375085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.412 [2024-06-11 12:26:36.375099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.412 [2024-06-11 12:26:36.375108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.412 [2024-06-11 12:26:36.375291] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.412 [2024-06-11 12:26:36.375422] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.412 [2024-06-11 12:26:36.375432] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.412 [2024-06-11 12:26:36.375440] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.412 [2024-06-11 12:26:36.377648] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.412 [2024-06-11 12:26:36.386620] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.412 [2024-06-11 12:26:36.387144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.412 [2024-06-11 12:26:36.387480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.412 [2024-06-11 12:26:36.387491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.412 [2024-06-11 12:26:36.387503] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.412 [2024-06-11 12:26:36.387609] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.413 [2024-06-11 12:26:36.387754] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.413 [2024-06-11 12:26:36.387763] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.413 [2024-06-11 12:26:36.387769] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.413 [2024-06-11 12:26:36.390190] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.413 [2024-06-11 12:26:36.398930] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.413 [2024-06-11 12:26:36.399528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.413 [2024-06-11 12:26:36.399940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.413 [2024-06-11 12:26:36.399953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.413 [2024-06-11 12:26:36.399963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.413 [2024-06-11 12:26:36.400098] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.413 [2024-06-11 12:26:36.400228] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.413 [2024-06-11 12:26:36.400237] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.413 [2024-06-11 12:26:36.400245] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.413 [2024-06-11 12:26:36.402352] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.413 [2024-06-11 12:26:36.411374] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.413 [2024-06-11 12:26:36.411870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.413 [2024-06-11 12:26:36.412096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.413 [2024-06-11 12:26:36.412109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.413 [2024-06-11 12:26:36.412116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.413 [2024-06-11 12:26:36.412262] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.413 [2024-06-11 12:26:36.412407] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.413 [2024-06-11 12:26:36.412415] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.413 [2024-06-11 12:26:36.412422] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.413 [2024-06-11 12:26:36.414597] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.413 [2024-06-11 12:26:36.423980] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.413 [2024-06-11 12:26:36.424514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.413 [2024-06-11 12:26:36.424851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.413 [2024-06-11 12:26:36.424862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.413 [2024-06-11 12:26:36.424869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.413 [2024-06-11 12:26:36.425043] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.413 [2024-06-11 12:26:36.425207] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.413 [2024-06-11 12:26:36.425216] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.413 [2024-06-11 12:26:36.425222] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.413 [2024-06-11 12:26:36.427524] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.413 [2024-06-11 12:26:36.436751] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.413 [2024-06-11 12:26:36.437338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.413 [2024-06-11 12:26:36.437666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.413 [2024-06-11 12:26:36.437680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.413 [2024-06-11 12:26:36.437689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.413 [2024-06-11 12:26:36.437833] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.413 [2024-06-11 12:26:36.437981] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.413 [2024-06-11 12:26:36.437990] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.413 [2024-06-11 12:26:36.437997] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.413 [2024-06-11 12:26:36.440169] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.675 [2024-06-11 12:26:36.449464] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.675 [2024-06-11 12:26:36.449960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.675 [2024-06-11 12:26:36.450342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.675 [2024-06-11 12:26:36.450356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.675 [2024-06-11 12:26:36.450365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.675 [2024-06-11 12:26:36.450529] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.675 [2024-06-11 12:26:36.450658] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.675 [2024-06-11 12:26:36.450667] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.675 [2024-06-11 12:26:36.450675] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.675 [2024-06-11 12:26:36.453007] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.675 [2024-06-11 12:26:36.462110] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.675 [2024-06-11 12:26:36.462708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.675 [2024-06-11 12:26:36.463049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.675 [2024-06-11 12:26:36.463064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.675 [2024-06-11 12:26:36.463073] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.675 [2024-06-11 12:26:36.463274] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.675 [2024-06-11 12:26:36.463445] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.675 [2024-06-11 12:26:36.463456] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.675 [2024-06-11 12:26:36.463464] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.675 [2024-06-11 12:26:36.465629] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.675 [2024-06-11 12:26:36.474665] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.675 [2024-06-11 12:26:36.475275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.675 [2024-06-11 12:26:36.475655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.675 [2024-06-11 12:26:36.475668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.675 [2024-06-11 12:26:36.475677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.675 [2024-06-11 12:26:36.475822] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.675 [2024-06-11 12:26:36.475950] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.676 [2024-06-11 12:26:36.475959] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.676 [2024-06-11 12:26:36.475967] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.676 [2024-06-11 12:26:36.478347] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.676 [2024-06-11 12:26:36.487081] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.676 [2024-06-11 12:26:36.487680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.676 [2024-06-11 12:26:36.488026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.676 [2024-06-11 12:26:36.488041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.676 [2024-06-11 12:26:36.488051] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.676 [2024-06-11 12:26:36.488215] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.676 [2024-06-11 12:26:36.488307] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.676 [2024-06-11 12:26:36.488316] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.676 [2024-06-11 12:26:36.488324] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.676 [2024-06-11 12:26:36.490579] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.676 [2024-06-11 12:26:36.499731] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.676 [2024-06-11 12:26:36.500254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.676 [2024-06-11 12:26:36.500625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.676 [2024-06-11 12:26:36.500639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.676 [2024-06-11 12:26:36.500648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.676 [2024-06-11 12:26:36.500867] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.676 [2024-06-11 12:26:36.501043] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.676 [2024-06-11 12:26:36.501057] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.676 [2024-06-11 12:26:36.501065] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.676 [2024-06-11 12:26:36.503375] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.676 [2024-06-11 12:26:36.512322] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.676 [2024-06-11 12:26:36.512930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.676 [2024-06-11 12:26:36.513248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.676 [2024-06-11 12:26:36.513263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.676 [2024-06-11 12:26:36.513273] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.676 [2024-06-11 12:26:36.513398] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.676 [2024-06-11 12:26:36.513546] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.676 [2024-06-11 12:26:36.513555] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.676 [2024-06-11 12:26:36.513563] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.676 [2024-06-11 12:26:36.515786] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.676 [2024-06-11 12:26:36.524814] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.676 [2024-06-11 12:26:36.525268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.676 [2024-06-11 12:26:36.525583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.676 [2024-06-11 12:26:36.525594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.676 [2024-06-11 12:26:36.525601] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.676 [2024-06-11 12:26:36.525727] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.676 [2024-06-11 12:26:36.525872] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.676 [2024-06-11 12:26:36.525880] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.676 [2024-06-11 12:26:36.525888] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.676 [2024-06-11 12:26:36.528196] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.676 [2024-06-11 12:26:36.537584] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.676 [2024-06-11 12:26:36.538123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.676 [2024-06-11 12:26:36.538497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.676 [2024-06-11 12:26:36.538510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.676 [2024-06-11 12:26:36.538519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.676 [2024-06-11 12:26:36.538645] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.676 [2024-06-11 12:26:36.538812] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.676 [2024-06-11 12:26:36.538821] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.676 [2024-06-11 12:26:36.538833] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.676 [2024-06-11 12:26:36.541040] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.676 [2024-06-11 12:26:36.550134] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.676 [2024-06-11 12:26:36.550738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.676 [2024-06-11 12:26:36.551113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.676 [2024-06-11 12:26:36.551128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.676 [2024-06-11 12:26:36.551138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.676 [2024-06-11 12:26:36.551283] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.676 [2024-06-11 12:26:36.551430] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.676 [2024-06-11 12:26:36.551440] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.676 [2024-06-11 12:26:36.551448] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.676 [2024-06-11 12:26:36.553668] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.676 [2024-06-11 12:26:36.562533] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.676 [2024-06-11 12:26:36.563096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.676 [2024-06-11 12:26:36.563432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.676 [2024-06-11 12:26:36.563445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.676 [2024-06-11 12:26:36.563455] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.676 [2024-06-11 12:26:36.563580] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.676 [2024-06-11 12:26:36.563729] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.676 [2024-06-11 12:26:36.563738] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.676 [2024-06-11 12:26:36.563745] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.677 [2024-06-11 12:26:36.566060] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.677 [2024-06-11 12:26:36.575166] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.677 [2024-06-11 12:26:36.575739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.677 [2024-06-11 12:26:36.576115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.677 [2024-06-11 12:26:36.576130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.677 [2024-06-11 12:26:36.576139] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.677 [2024-06-11 12:26:36.576340] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.677 [2024-06-11 12:26:36.576470] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.677 [2024-06-11 12:26:36.576479] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.677 [2024-06-11 12:26:36.576487] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.677 [2024-06-11 12:26:36.578556] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.677 [2024-06-11 12:26:36.587634] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.677 [2024-06-11 12:26:36.588101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.677 [2024-06-11 12:26:36.588386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.677 [2024-06-11 12:26:36.588397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.677 [2024-06-11 12:26:36.588405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.677 [2024-06-11 12:26:36.588531] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.677 [2024-06-11 12:26:36.588675] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.677 [2024-06-11 12:26:36.588684] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.677 [2024-06-11 12:26:36.588691] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.677 [2024-06-11 12:26:36.591054] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.677 [2024-06-11 12:26:36.600607] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.677 [2024-06-11 12:26:36.601063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.677 [2024-06-11 12:26:36.601391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.677 [2024-06-11 12:26:36.601402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.677 [2024-06-11 12:26:36.601409] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.677 [2024-06-11 12:26:36.601558] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.677 [2024-06-11 12:26:36.601702] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.677 [2024-06-11 12:26:36.601711] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.677 [2024-06-11 12:26:36.601718] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.677 [2024-06-11 12:26:36.603990] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.677 [2024-06-11 12:26:36.613075] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.677 [2024-06-11 12:26:36.613658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.677 [2024-06-11 12:26:36.614031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.677 [2024-06-11 12:26:36.614046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.677 [2024-06-11 12:26:36.614056] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.677 [2024-06-11 12:26:36.614200] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.677 [2024-06-11 12:26:36.614311] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.677 [2024-06-11 12:26:36.614319] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.677 [2024-06-11 12:26:36.614326] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.677 [2024-06-11 12:26:36.616565] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.677 [2024-06-11 12:26:36.625641] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.677 [2024-06-11 12:26:36.626159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.677 [2024-06-11 12:26:36.626508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.677 [2024-06-11 12:26:36.626521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.677 [2024-06-11 12:26:36.626531] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.677 [2024-06-11 12:26:36.626694] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.677 [2024-06-11 12:26:36.626861] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.677 [2024-06-11 12:26:36.626870] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.677 [2024-06-11 12:26:36.626878] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.677 [2024-06-11 12:26:36.629063] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.677 [2024-06-11 12:26:36.638198] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.677 [2024-06-11 12:26:36.638760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.677 [2024-06-11 12:26:36.639133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.677 [2024-06-11 12:26:36.639147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.677 [2024-06-11 12:26:36.639157] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.677 [2024-06-11 12:26:36.639301] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.677 [2024-06-11 12:26:36.639487] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.677 [2024-06-11 12:26:36.639496] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.677 [2024-06-11 12:26:36.639504] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.677 [2024-06-11 12:26:36.641759] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.677 [2024-06-11 12:26:36.650675] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.677 [2024-06-11 12:26:36.651283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.677 [2024-06-11 12:26:36.651654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.677 [2024-06-11 12:26:36.651667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.677 [2024-06-11 12:26:36.651677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.677 [2024-06-11 12:26:36.651822] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.677 [2024-06-11 12:26:36.651970] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.677 [2024-06-11 12:26:36.651980] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.677 [2024-06-11 12:26:36.651987] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.677 [2024-06-11 12:26:36.654175] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.677 [2024-06-11 12:26:36.663263] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.677 [2024-06-11 12:26:36.663869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.677 [2024-06-11 12:26:36.664101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.677 [2024-06-11 12:26:36.664116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.678 [2024-06-11 12:26:36.664125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.678 [2024-06-11 12:26:36.664270] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.678 [2024-06-11 12:26:36.664455] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.678 [2024-06-11 12:26:36.664465] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.678 [2024-06-11 12:26:36.664473] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.678 [2024-06-11 12:26:36.666580] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.678 [2024-06-11 12:26:36.675817] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.678 [2024-06-11 12:26:36.676272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.678 [2024-06-11 12:26:36.676610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.678 [2024-06-11 12:26:36.676622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.678 [2024-06-11 12:26:36.676630] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.678 [2024-06-11 12:26:36.676811] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.678 [2024-06-11 12:26:36.676955] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.678 [2024-06-11 12:26:36.676964] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.678 [2024-06-11 12:26:36.676971] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.678 [2024-06-11 12:26:36.679180] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.678 [2024-06-11 12:26:36.688543] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.678 [2024-06-11 12:26:36.688882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.678 [2024-06-11 12:26:36.689208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.678 [2024-06-11 12:26:36.689219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.678 [2024-06-11 12:26:36.689226] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.678 [2024-06-11 12:26:36.689409] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.678 [2024-06-11 12:26:36.689555] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.678 [2024-06-11 12:26:36.689564] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.678 [2024-06-11 12:26:36.689571] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.678 [2024-06-11 12:26:36.691914] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.678 [2024-06-11 12:26:36.701115] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.678 [2024-06-11 12:26:36.701588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.678 [2024-06-11 12:26:36.701918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.678 [2024-06-11 12:26:36.701932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.678 [2024-06-11 12:26:36.701940] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.678 [2024-06-11 12:26:36.702071] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.678 [2024-06-11 12:26:36.702216] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.678 [2024-06-11 12:26:36.702224] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.678 [2024-06-11 12:26:36.702231] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.678 [2024-06-11 12:26:36.704423] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.940 [2024-06-11 12:26:36.713792] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.940 [2024-06-11 12:26:36.714314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.940 [2024-06-11 12:26:36.714638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.940 [2024-06-11 12:26:36.714649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.940 [2024-06-11 12:26:36.714656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.940 [2024-06-11 12:26:36.714800] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.940 [2024-06-11 12:26:36.714944] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.940 [2024-06-11 12:26:36.714952] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.940 [2024-06-11 12:26:36.714959] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.940 [2024-06-11 12:26:36.717177] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.940 [2024-06-11 12:26:36.726137] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.940 [2024-06-11 12:26:36.726576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.940 [2024-06-11 12:26:36.726765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.940 [2024-06-11 12:26:36.726775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.940 [2024-06-11 12:26:36.726783] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.940 [2024-06-11 12:26:36.726927] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.940 [2024-06-11 12:26:36.727078] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.940 [2024-06-11 12:26:36.727089] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.940 [2024-06-11 12:26:36.727096] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.940 [2024-06-11 12:26:36.729379] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1697078 Killed "${NVMF_APP[@]}" "$@" 00:32:23.941 12:26:36 -- host/bdevperf.sh@36 -- # tgt_init 00:32:23.941 12:26:36 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:23.941 12:26:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:23.941 12:26:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:23.941 12:26:36 -- common/autotest_common.sh@10 -- # set +x 00:32:23.941 [2024-06-11 12:26:36.738580] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.941 [2024-06-11 12:26:36.739146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.941 [2024-06-11 12:26:36.739413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.941 [2024-06-11 12:26:36.739426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.941 [2024-06-11 12:26:36.739436] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.941 [2024-06-11 12:26:36.739582] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.941 [2024-06-11 12:26:36.739749] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.941 [2024-06-11 12:26:36.739758] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.941 [2024-06-11 12:26:36.739766] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.941 12:26:36 -- nvmf/common.sh@469 -- # nvmfpid=1698706 00:32:23.941 12:26:36 -- nvmf/common.sh@470 -- # waitforlisten 1698706 00:32:23.941 12:26:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:23.941 12:26:36 -- common/autotest_common.sh@819 -- # '[' -z 1698706 ']' 00:32:23.941 [2024-06-11 12:26:36.742029] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.941 12:26:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:23.941 12:26:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:23.941 12:26:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:23.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:23.941 12:26:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:23.941 12:26:36 -- common/autotest_common.sh@10 -- # set +x 00:32:23.941 [2024-06-11 12:26:36.751168] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.941 [2024-06-11 12:26:36.751728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.941 [2024-06-11 12:26:36.751989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.941 [2024-06-11 12:26:36.752003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.941 [2024-06-11 12:26:36.752013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.941 [2024-06-11 12:26:36.752183] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.941 [2024-06-11 12:26:36.752332] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.941 [2024-06-11 12:26:36.752341] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.941 [2024-06-11 12:26:36.752349] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.941 [2024-06-11 12:26:36.754546] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.941 [2024-06-11 12:26:36.763551] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.941 [2024-06-11 12:26:36.763939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.941 [2024-06-11 12:26:36.764147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.941 [2024-06-11 12:26:36.764160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.941 [2024-06-11 12:26:36.764169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.941 [2024-06-11 12:26:36.764332] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.941 [2024-06-11 12:26:36.764481] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.941 [2024-06-11 12:26:36.764489] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.941 [2024-06-11 12:26:36.764496] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.941 [2024-06-11 12:26:36.766966] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.941 [2024-06-11 12:26:36.775903] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.941 [2024-06-11 12:26:36.776414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.941 [2024-06-11 12:26:36.776713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.941 [2024-06-11 12:26:36.776723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.941 [2024-06-11 12:26:36.776730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.941 [2024-06-11 12:26:36.776894] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.941 [2024-06-11 12:26:36.777033] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.941 [2024-06-11 12:26:36.777041] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.941 [2024-06-11 12:26:36.777048] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.941 [2024-06-11 12:26:36.779335] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.941 [2024-06-11 12:26:36.788423] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.941 [2024-06-11 12:26:36.788989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.941 [2024-06-11 12:26:36.789237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.941 [2024-06-11 12:26:36.789252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.941 [2024-06-11 12:26:36.789261] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.941 [2024-06-11 12:26:36.789388] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.941 [2024-06-11 12:26:36.789554] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.941 [2024-06-11 12:26:36.789563] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.941 [2024-06-11 12:26:36.789571] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.941 [2024-06-11 12:26:36.792053] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.941 [2024-06-11 12:26:36.793173] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:32:23.941 [2024-06-11 12:26:36.793220] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:23.941 [2024-06-11 12:26:36.800838] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.941 [2024-06-11 12:26:36.801225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.941 [2024-06-11 12:26:36.801560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.941 [2024-06-11 12:26:36.801570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.941 [2024-06-11 12:26:36.801583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.941 [2024-06-11 12:26:36.801747] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.941 [2024-06-11 12:26:36.801929] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.941 [2024-06-11 12:26:36.801937] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.941 [2024-06-11 12:26:36.801944] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.941 [2024-06-11 12:26:36.804104] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.941 [2024-06-11 12:26:36.813498] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.941 [2024-06-11 12:26:36.813942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.942 [2024-06-11 12:26:36.814264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.942 [2024-06-11 12:26:36.814274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.942 [2024-06-11 12:26:36.814282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.942 [2024-06-11 12:26:36.814389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.942 [2024-06-11 12:26:36.814496] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.942 [2024-06-11 12:26:36.814504] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.942 [2024-06-11 12:26:36.814510] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.942 [2024-06-11 12:26:36.816572] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.942 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.942 [2024-06-11 12:26:36.825958] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.942 [2024-06-11 12:26:36.826444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.942 [2024-06-11 12:26:36.826765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.942 [2024-06-11 12:26:36.826778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.942 [2024-06-11 12:26:36.826787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.942 [2024-06-11 12:26:36.826969] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.942 [2024-06-11 12:26:36.827123] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.942 [2024-06-11 12:26:36.827132] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.942 [2024-06-11 12:26:36.827139] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.942 [2024-06-11 12:26:36.829194] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.942 [2024-06-11 12:26:36.838596] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.942 [2024-06-11 12:26:36.839115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.942 [2024-06-11 12:26:36.839461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.942 [2024-06-11 12:26:36.839471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.942 [2024-06-11 12:26:36.839479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.942 [2024-06-11 12:26:36.839650] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.942 [2024-06-11 12:26:36.839795] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.942 [2024-06-11 12:26:36.839803] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.942 [2024-06-11 12:26:36.839809] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.942 [2024-06-11 12:26:36.841891] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
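The "EAL: No free 2048 kB hugepages reported on node 1" line is a DPDK notice that node 1 has no 2 MB hugepages available; the run continues, so the allocation was evidently satisfied elsewhere (typically node 0). If hugepages on that node were actually needed, they can be reserved through the standard kernel sysfs knob; the node and count below are illustrative only:

    # illustrative only: reserve 1024 x 2 MB hugepages on NUMA node 1
    echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
    grep -i huge /proc/meminfo   # verify the reservation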
00:32:23.942 [2024-06-11 12:26:36.851061] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.942 [2024-06-11 12:26:36.851643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.942 [2024-06-11 12:26:36.852005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.942 [2024-06-11 12:26:36.852025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.942 [2024-06-11 12:26:36.852035] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.942 [2024-06-11 12:26:36.852218] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.942 [2024-06-11 12:26:36.852422] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.942 [2024-06-11 12:26:36.852430] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.942 [2024-06-11 12:26:36.852437] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.942 [2024-06-11 12:26:36.854656] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.942 [2024-06-11 12:26:36.863387] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.942 [2024-06-11 12:26:36.863942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.942 [2024-06-11 12:26:36.864178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.942 [2024-06-11 12:26:36.864192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.942 [2024-06-11 12:26:36.864201] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.942 [2024-06-11 12:26:36.864328] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.942 [2024-06-11 12:26:36.864475] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.942 [2024-06-11 12:26:36.864484] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.942 [2024-06-11 12:26:36.864491] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.942 [2024-06-11 12:26:36.866691] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.942 [2024-06-11 12:26:36.873108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:23.942 [2024-06-11 12:26:36.876009] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.942 [2024-06-11 12:26:36.876559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.942 [2024-06-11 12:26:36.876921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.942 [2024-06-11 12:26:36.876935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.942 [2024-06-11 12:26:36.876944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.942 [2024-06-11 12:26:36.877109] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.942 [2024-06-11 12:26:36.877296] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.942 [2024-06-11 12:26:36.877304] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.942 [2024-06-11 12:26:36.877311] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.942 [2024-06-11 12:26:36.879528] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.942 [2024-06-11 12:26:36.888588] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.942 [2024-06-11 12:26:36.889118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.942 [2024-06-11 12:26:36.889477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.942 [2024-06-11 12:26:36.889490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.942 [2024-06-11 12:26:36.889500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.942 [2024-06-11 12:26:36.889665] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.942 [2024-06-11 12:26:36.889795] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.942 [2024-06-11 12:26:36.889803] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.942 [2024-06-11 12:26:36.889810] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.942 [2024-06-11 12:26:36.892055] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.942 [2024-06-11 12:26:36.899828] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:23.942 [2024-06-11 12:26:36.899917] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:23.942 [2024-06-11 12:26:36.899923] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:23.942 [2024-06-11 12:26:36.899928] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
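The target was started with -e 0xFFFF, so the full tracepoint group mask is enabled and app_setup_trace prints the two ways to grab the data. Following the commands the log itself suggests (output file name is an assumption):

    # snapshot the live trace of the nvmf app instance with shm id 0
    spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    # or keep the raw shared-memory trace file for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/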
00:32:23.942 [2024-06-11 12:26:36.900065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:23.942 [2024-06-11 12:26:36.900313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:23.942 [2024-06-11 12:26:36.900312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:23.942 [2024-06-11 12:26:36.901045] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.942 [2024-06-11 12:26:36.901504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.942 [2024-06-11 12:26:36.901738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.942 [2024-06-11 12:26:36.901751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.943 [2024-06-11 12:26:36.901760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.943 [2024-06-11 12:26:36.901868] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.943 [2024-06-11 12:26:36.902016] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.943 [2024-06-11 12:26:36.902031] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.943 [2024-06-11 12:26:36.902038] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.943 [2024-06-11 12:26:36.904315] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.943 [2024-06-11 12:26:36.913628] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.943 [2024-06-11 12:26:36.914117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.943 [2024-06-11 12:26:36.914476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.943 [2024-06-11 12:26:36.914486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.943 [2024-06-11 12:26:36.914494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.943 [2024-06-11 12:26:36.914621] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.943 [2024-06-11 12:26:36.914709] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.943 [2024-06-11 12:26:36.914717] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.943 [2024-06-11 12:26:36.914724] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.943 [2024-06-11 12:26:36.917128] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.943 [2024-06-11 12:26:36.926047] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.943 [2024-06-11 12:26:36.926556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.943 [2024-06-11 12:26:36.926838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.943 [2024-06-11 12:26:36.926848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.943 [2024-06-11 12:26:36.926855] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.943 [2024-06-11 12:26:36.927026] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.943 [2024-06-11 12:26:36.927191] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.943 [2024-06-11 12:26:36.927198] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.943 [2024-06-11 12:26:36.927205] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.943 [2024-06-11 12:26:36.929475] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.943 [2024-06-11 12:26:36.938543] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.943 [2024-06-11 12:26:36.939066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.943 [2024-06-11 12:26:36.939380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.943 [2024-06-11 12:26:36.939389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.943 [2024-06-11 12:26:36.939397] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.943 [2024-06-11 12:26:36.939552] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.943 [2024-06-11 12:26:36.939678] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.943 [2024-06-11 12:26:36.939686] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.943 [2024-06-11 12:26:36.939693] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.943 [2024-06-11 12:26:36.941908] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:23.943 [2024-06-11 12:26:36.951001] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.943 [2024-06-11 12:26:36.951267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.943 [2024-06-11 12:26:36.951460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.943 [2024-06-11 12:26:36.951470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.943 [2024-06-11 12:26:36.951477] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.943 [2024-06-11 12:26:36.951622] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.943 [2024-06-11 12:26:36.951767] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.943 [2024-06-11 12:26:36.951775] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.943 [2024-06-11 12:26:36.951782] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.943 [2024-06-11 12:26:36.953880] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:23.943 [2024-06-11 12:26:36.963558] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.943 [2024-06-11 12:26:36.964004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.943 [2024-06-11 12:26:36.964365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.943 [2024-06-11 12:26:36.964375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:23.943 [2024-06-11 12:26:36.964383] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:23.943 [2024-06-11 12:26:36.964565] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:23.943 [2024-06-11 12:26:36.964691] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:23.943 [2024-06-11 12:26:36.964699] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:23.943 [2024-06-11 12:26:36.964706] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.943 [2024-06-11 12:26:36.966988] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.205 [2024-06-11 12:26:36.976152] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.205 [2024-06-11 12:26:36.976647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.205 [2024-06-11 12:26:36.976981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.205 [2024-06-11 12:26:36.976994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.205 [2024-06-11 12:26:36.977004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.205 [2024-06-11 12:26:36.977223] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.205 [2024-06-11 12:26:36.977372] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.205 [2024-06-11 12:26:36.977380] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.205 [2024-06-11 12:26:36.977388] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.205 [2024-06-11 12:26:36.979603] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.205 [2024-06-11 12:26:36.988841] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.205 [2024-06-11 12:26:36.989416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.205 [2024-06-11 12:26:36.989654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.205 [2024-06-11 12:26:36.989671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.205 [2024-06-11 12:26:36.989680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.205 [2024-06-11 12:26:36.989825] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.205 [2024-06-11 12:26:36.990010] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.205 [2024-06-11 12:26:36.990027] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.205 [2024-06-11 12:26:36.990034] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.205 [2024-06-11 12:26:36.992271] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.205 [2024-06-11 12:26:37.001327] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.205 [2024-06-11 12:26:37.001887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.205 [2024-06-11 12:26:37.002048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.205 [2024-06-11 12:26:37.002065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.205 [2024-06-11 12:26:37.002074] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.205 [2024-06-11 12:26:37.002256] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.205 [2024-06-11 12:26:37.002366] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.205 [2024-06-11 12:26:37.002374] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.205 [2024-06-11 12:26:37.002381] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.205 [2024-06-11 12:26:37.004572] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.205 [2024-06-11 12:26:37.013700] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.205 [2024-06-11 12:26:37.014156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.205 [2024-06-11 12:26:37.014573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.205 [2024-06-11 12:26:37.014583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.205 [2024-06-11 12:26:37.014590] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.205 [2024-06-11 12:26:37.014716] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.205 [2024-06-11 12:26:37.014897] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.205 [2024-06-11 12:26:37.014905] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.205 [2024-06-11 12:26:37.014912] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.205 [2024-06-11 12:26:37.017292] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.205 [2024-06-11 12:26:37.026113] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.205 [2024-06-11 12:26:37.026590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.205 [2024-06-11 12:26:37.026915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.205 [2024-06-11 12:26:37.026925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.205 [2024-06-11 12:26:37.026936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.205 [2024-06-11 12:26:37.027122] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.205 [2024-06-11 12:26:37.027285] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.205 [2024-06-11 12:26:37.027293] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.205 [2024-06-11 12:26:37.027299] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.205 [2024-06-11 12:26:37.029545] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.205 [2024-06-11 12:26:37.038701] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.205 [2024-06-11 12:26:37.039167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.205 [2024-06-11 12:26:37.039375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.205 [2024-06-11 12:26:37.039385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.205 [2024-06-11 12:26:37.039393] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.205 [2024-06-11 12:26:37.039537] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.205 [2024-06-11 12:26:37.039643] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.205 [2024-06-11 12:26:37.039651] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.205 [2024-06-11 12:26:37.039657] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.205 [2024-06-11 12:26:37.041884] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.205 [2024-06-11 12:26:37.051106] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.205 [2024-06-11 12:26:37.051600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.205 [2024-06-11 12:26:37.051897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.205 [2024-06-11 12:26:37.051906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.205 [2024-06-11 12:26:37.051913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.205 [2024-06-11 12:26:37.052062] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.205 [2024-06-11 12:26:37.052225] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.205 [2024-06-11 12:26:37.052232] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.205 [2024-06-11 12:26:37.052239] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.205 [2024-06-11 12:26:37.054613] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.205 [2024-06-11 12:26:37.063683] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.205 [2024-06-11 12:26:37.064151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.205 [2024-06-11 12:26:37.064489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.205 [2024-06-11 12:26:37.064501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.205 [2024-06-11 12:26:37.064511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.206 [2024-06-11 12:26:37.064659] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.206 [2024-06-11 12:26:37.064807] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.206 [2024-06-11 12:26:37.064815] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.206 [2024-06-11 12:26:37.064822] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.206 [2024-06-11 12:26:37.067174] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.206 [2024-06-11 12:26:37.076239] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.206 [2024-06-11 12:26:37.076827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.206 [2024-06-11 12:26:37.077166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.206 [2024-06-11 12:26:37.077181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.206 [2024-06-11 12:26:37.077190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.206 [2024-06-11 12:26:37.077372] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.206 [2024-06-11 12:26:37.077501] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.206 [2024-06-11 12:26:37.077509] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.206 [2024-06-11 12:26:37.077516] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.206 [2024-06-11 12:26:37.079688] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.206 [2024-06-11 12:26:37.088706] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.206 [2024-06-11 12:26:37.089303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.206 [2024-06-11 12:26:37.089540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.206 [2024-06-11 12:26:37.089554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.206 [2024-06-11 12:26:37.089563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.206 [2024-06-11 12:26:37.089726] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.206 [2024-06-11 12:26:37.089894] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.206 [2024-06-11 12:26:37.089903] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.206 [2024-06-11 12:26:37.089910] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.206 [2024-06-11 12:26:37.092244] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.206 [2024-06-11 12:26:37.101298] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.206 [2024-06-11 12:26:37.101907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.206 [2024-06-11 12:26:37.102118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.206 [2024-06-11 12:26:37.102133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.206 [2024-06-11 12:26:37.102142] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.206 [2024-06-11 12:26:37.102324] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.206 [2024-06-11 12:26:37.102476] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.206 [2024-06-11 12:26:37.102484] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.206 [2024-06-11 12:26:37.102492] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.206 [2024-06-11 12:26:37.104712] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.206 [2024-06-11 12:26:37.113668] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.206 [2024-06-11 12:26:37.114122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.206 [2024-06-11 12:26:37.114481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.206 [2024-06-11 12:26:37.114491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.206 [2024-06-11 12:26:37.114498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.206 [2024-06-11 12:26:37.114586] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.206 [2024-06-11 12:26:37.114693] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.206 [2024-06-11 12:26:37.114701] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.206 [2024-06-11 12:26:37.114707] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.206 [2024-06-11 12:26:37.117010] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.206 [2024-06-11 12:26:37.126148] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.206 [2024-06-11 12:26:37.126670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.206 [2024-06-11 12:26:37.126880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.206 [2024-06-11 12:26:37.126893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.206 [2024-06-11 12:26:37.126902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.206 [2024-06-11 12:26:37.127072] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.206 [2024-06-11 12:26:37.127202] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.206 [2024-06-11 12:26:37.127210] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.206 [2024-06-11 12:26:37.127217] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.206 [2024-06-11 12:26:37.129490] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.206 [2024-06-11 12:26:37.138433] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.206 [2024-06-11 12:26:37.138934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.206 [2024-06-11 12:26:37.139250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.206 [2024-06-11 12:26:37.139260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.206 [2024-06-11 12:26:37.139268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.206 [2024-06-11 12:26:37.139431] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.206 [2024-06-11 12:26:37.139537] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.206 [2024-06-11 12:26:37.139548] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.206 [2024-06-11 12:26:37.139555] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.206 [2024-06-11 12:26:37.141952] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.206 [2024-06-11 12:26:37.150829] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.206 [2024-06-11 12:26:37.151292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.206 [2024-06-11 12:26:37.151596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.206 [2024-06-11 12:26:37.151605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.206 [2024-06-11 12:26:37.151612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.206 [2024-06-11 12:26:37.151756] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.206 [2024-06-11 12:26:37.151900] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.206 [2024-06-11 12:26:37.151908] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.206 [2024-06-11 12:26:37.151914] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.206 [2024-06-11 12:26:37.154163] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.206 [2024-06-11 12:26:37.163205] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.206 [2024-06-11 12:26:37.163706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.206 [2024-06-11 12:26:37.164068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.207 [2024-06-11 12:26:37.164083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.207 [2024-06-11 12:26:37.164092] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.207 [2024-06-11 12:26:37.164255] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.207 [2024-06-11 12:26:37.164403] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.207 [2024-06-11 12:26:37.164411] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.207 [2024-06-11 12:26:37.164418] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.207 [2024-06-11 12:26:37.166824] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.207 [2024-06-11 12:26:37.175656] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.207 [2024-06-11 12:26:37.176129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.207 [2024-06-11 12:26:37.176526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.207 [2024-06-11 12:26:37.176538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.207 [2024-06-11 12:26:37.176547] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.207 [2024-06-11 12:26:37.176711] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.207 [2024-06-11 12:26:37.176840] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.207 [2024-06-11 12:26:37.176848] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.207 [2024-06-11 12:26:37.176859] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.207 [2024-06-11 12:26:37.179168] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.207 [2024-06-11 12:26:37.188121] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.207 [2024-06-11 12:26:37.188329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.207 [2024-06-11 12:26:37.188683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.207 [2024-06-11 12:26:37.188693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.207 [2024-06-11 12:26:37.188700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.207 [2024-06-11 12:26:37.188844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.207 [2024-06-11 12:26:37.188988] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.207 [2024-06-11 12:26:37.188995] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.207 [2024-06-11 12:26:37.189002] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.207 [2024-06-11 12:26:37.191309] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.207 [2024-06-11 12:26:37.200727] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.207 [2024-06-11 12:26:37.201347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.207 [2024-06-11 12:26:37.201711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.207 [2024-06-11 12:26:37.201725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.207 [2024-06-11 12:26:37.201734] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.207 [2024-06-11 12:26:37.201897] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.207 [2024-06-11 12:26:37.202053] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.207 [2024-06-11 12:26:37.202062] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.207 [2024-06-11 12:26:37.202069] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.207 [2024-06-11 12:26:37.204303] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.207 [2024-06-11 12:26:37.213266] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.207 [2024-06-11 12:26:37.213651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.207 [2024-06-11 12:26:37.213976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.207 [2024-06-11 12:26:37.213985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.207 [2024-06-11 12:26:37.213993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.207 [2024-06-11 12:26:37.214142] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.207 [2024-06-11 12:26:37.214286] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.207 [2024-06-11 12:26:37.214294] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.207 [2024-06-11 12:26:37.214300] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.207 [2024-06-11 12:26:37.216567] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.207 [2024-06-11 12:26:37.225883] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.207 [2024-06-11 12:26:37.226310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.207 [2024-06-11 12:26:37.226630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.207 [2024-06-11 12:26:37.226643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.207 [2024-06-11 12:26:37.226652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.207 [2024-06-11 12:26:37.226816] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.207 [2024-06-11 12:26:37.226944] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.207 [2024-06-11 12:26:37.226952] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.207 [2024-06-11 12:26:37.226960] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.207 [2024-06-11 12:26:37.229353] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.470 [2024-06-11 12:26:37.238520] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.470 [2024-06-11 12:26:37.238859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.470 [2024-06-11 12:26:37.239102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.470 [2024-06-11 12:26:37.239112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.470 [2024-06-11 12:26:37.239120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.470 [2024-06-11 12:26:37.239227] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.470 [2024-06-11 12:26:37.239371] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.470 [2024-06-11 12:26:37.239379] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.470 [2024-06-11 12:26:37.239386] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.470 [2024-06-11 12:26:37.241728] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.470 [2024-06-11 12:26:37.251121] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.470 [2024-06-11 12:26:37.251700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.470 [2024-06-11 12:26:37.252090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.470 [2024-06-11 12:26:37.252105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.470 [2024-06-11 12:26:37.252114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.470 [2024-06-11 12:26:37.252277] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.470 [2024-06-11 12:26:37.252443] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.470 [2024-06-11 12:26:37.252451] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.470 [2024-06-11 12:26:37.252459] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.470 [2024-06-11 12:26:37.254771] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.470 [2024-06-11 12:26:37.263643] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.470 [2024-06-11 12:26:37.264293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.470 [2024-06-11 12:26:37.264651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.470 [2024-06-11 12:26:37.264664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.470 [2024-06-11 12:26:37.264673] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.470 [2024-06-11 12:26:37.264836] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.470 [2024-06-11 12:26:37.264965] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.470 [2024-06-11 12:26:37.264973] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.470 [2024-06-11 12:26:37.264980] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.470 [2024-06-11 12:26:37.267296] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.470 [2024-06-11 12:26:37.276334] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.470 [2024-06-11 12:26:37.276882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.470 [2024-06-11 12:26:37.277243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.470 [2024-06-11 12:26:37.277258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.470 [2024-06-11 12:26:37.277267] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.470 [2024-06-11 12:26:37.277449] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.470 [2024-06-11 12:26:37.277615] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.470 [2024-06-11 12:26:37.277624] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.470 [2024-06-11 12:26:37.277631] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.470 [2024-06-11 12:26:37.280129] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.470 [2024-06-11 12:26:37.288874] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.470 [2024-06-11 12:26:37.289256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.470 [2024-06-11 12:26:37.289574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.470 [2024-06-11 12:26:37.289583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.470 [2024-06-11 12:26:37.289591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.470 [2024-06-11 12:26:37.289773] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.470 [2024-06-11 12:26:37.289897] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.470 [2024-06-11 12:26:37.289905] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.470 [2024-06-11 12:26:37.289912] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.470 [2024-06-11 12:26:37.292036] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.470 [2024-06-11 12:26:37.301201] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.470 [2024-06-11 12:26:37.301707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.470 [2024-06-11 12:26:37.301915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.470 [2024-06-11 12:26:37.301924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.470 [2024-06-11 12:26:37.301932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.470 [2024-06-11 12:26:37.302080] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.470 [2024-06-11 12:26:37.302262] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.470 [2024-06-11 12:26:37.302269] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.470 [2024-06-11 12:26:37.302276] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.470 [2024-06-11 12:26:37.304578] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.470 [2024-06-11 12:26:37.313745] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.470 [2024-06-11 12:26:37.314332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.470 [2024-06-11 12:26:37.314670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.470 [2024-06-11 12:26:37.314682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.470 [2024-06-11 12:26:37.314692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.470 [2024-06-11 12:26:37.314873] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.471 [2024-06-11 12:26:37.315084] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.471 [2024-06-11 12:26:37.315093] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.471 [2024-06-11 12:26:37.315100] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.471 [2024-06-11 12:26:37.317411] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.471 [2024-06-11 12:26:37.326431] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.471 [2024-06-11 12:26:37.326783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.471 [2024-06-11 12:26:37.327107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.471 [2024-06-11 12:26:37.327117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.471 [2024-06-11 12:26:37.327125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.471 [2024-06-11 12:26:37.327250] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.471 [2024-06-11 12:26:37.327375] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.471 [2024-06-11 12:26:37.327383] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.471 [2024-06-11 12:26:37.327390] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.471 [2024-06-11 12:26:37.329415] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.471 [2024-06-11 12:26:37.339058] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.471 [2024-06-11 12:26:37.339651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.471 [2024-06-11 12:26:37.339983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.471 [2024-06-11 12:26:37.339996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.471 [2024-06-11 12:26:37.340009] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.471 [2024-06-11 12:26:37.340179] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.471 [2024-06-11 12:26:37.340364] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.471 [2024-06-11 12:26:37.340372] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.471 [2024-06-11 12:26:37.340379] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.471 [2024-06-11 12:26:37.342652] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.471 [2024-06-11 12:26:37.351592] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.471 [2024-06-11 12:26:37.352216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.471 [2024-06-11 12:26:37.352577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.471 [2024-06-11 12:26:37.352590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.471 [2024-06-11 12:26:37.352599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.471 [2024-06-11 12:26:37.352686] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.471 [2024-06-11 12:26:37.352853] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.471 [2024-06-11 12:26:37.352861] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.471 [2024-06-11 12:26:37.352868] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.471 [2024-06-11 12:26:37.355220] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.471 [2024-06-11 12:26:37.364014] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.471 [2024-06-11 12:26:37.364518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.471 [2024-06-11 12:26:37.364827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.471 [2024-06-11 12:26:37.364837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.471 [2024-06-11 12:26:37.364844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.471 [2024-06-11 12:26:37.364970] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.471 [2024-06-11 12:26:37.365119] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.471 [2024-06-11 12:26:37.365128] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.471 [2024-06-11 12:26:37.365134] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.471 [2024-06-11 12:26:37.367567] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.471 [2024-06-11 12:26:37.376473] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.471 [2024-06-11 12:26:37.376776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.471 [2024-06-11 12:26:37.377139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.471 [2024-06-11 12:26:37.377150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.471 [2024-06-11 12:26:37.377157] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.471 [2024-06-11 12:26:37.377269] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.471 [2024-06-11 12:26:37.377376] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.471 [2024-06-11 12:26:37.377383] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.471 [2024-06-11 12:26:37.377390] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.471 [2024-06-11 12:26:37.379892] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.471 [2024-06-11 12:26:37.388995] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.471 [2024-06-11 12:26:37.389500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.471 [2024-06-11 12:26:37.389718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.471 [2024-06-11 12:26:37.389727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.471 [2024-06-11 12:26:37.389734] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.471 [2024-06-11 12:26:37.389860] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.471 [2024-06-11 12:26:37.390028] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.471 [2024-06-11 12:26:37.390037] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.471 [2024-06-11 12:26:37.390043] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.471 [2024-06-11 12:26:37.392195] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.471 [2024-06-11 12:26:37.401617] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.471 [2024-06-11 12:26:37.401996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.471 [2024-06-11 12:26:37.402282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.471 [2024-06-11 12:26:37.402291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.471 [2024-06-11 12:26:37.402299] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.471 [2024-06-11 12:26:37.402462] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.471 [2024-06-11 12:26:37.402588] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.471 [2024-06-11 12:26:37.402595] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.471 [2024-06-11 12:26:37.402601] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.471 [2024-06-11 12:26:37.404868] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.471 [2024-06-11 12:26:37.414288] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.471 [2024-06-11 12:26:37.414755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.471 [2024-06-11 12:26:37.415100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.471 [2024-06-11 12:26:37.415110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.471 [2024-06-11 12:26:37.415118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.471 [2024-06-11 12:26:37.415224] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.472 [2024-06-11 12:26:37.415375] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.472 [2024-06-11 12:26:37.415383] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.472 [2024-06-11 12:26:37.415390] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.472 [2024-06-11 12:26:37.417634] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.472 [2024-06-11 12:26:37.426706] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.472 [2024-06-11 12:26:37.427046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.472 [2024-06-11 12:26:37.427369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.472 [2024-06-11 12:26:37.427379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.472 [2024-06-11 12:26:37.427386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.472 [2024-06-11 12:26:37.427492] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.472 [2024-06-11 12:26:37.427636] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.472 [2024-06-11 12:26:37.427644] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.472 [2024-06-11 12:26:37.427650] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.472 [2024-06-11 12:26:37.430159] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.472 [2024-06-11 12:26:37.439095] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.472 [2024-06-11 12:26:37.439589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.472 [2024-06-11 12:26:37.439899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.472 [2024-06-11 12:26:37.439908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.472 [2024-06-11 12:26:37.439915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.472 [2024-06-11 12:26:37.440081] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.472 [2024-06-11 12:26:37.440225] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.472 [2024-06-11 12:26:37.440233] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.472 [2024-06-11 12:26:37.440240] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.472 [2024-06-11 12:26:37.442651] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.472 [2024-06-11 12:26:37.451526] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.472 [2024-06-11 12:26:37.451990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.472 [2024-06-11 12:26:37.452286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.472 [2024-06-11 12:26:37.452296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.472 [2024-06-11 12:26:37.452304] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.472 [2024-06-11 12:26:37.452429] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.472 [2024-06-11 12:26:37.452573] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.472 [2024-06-11 12:26:37.452584] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.472 [2024-06-11 12:26:37.452591] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.472 [2024-06-11 12:26:37.454968] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.472 [2024-06-11 12:26:37.463930] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.472 [2024-06-11 12:26:37.464389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.472 [2024-06-11 12:26:37.464710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.472 [2024-06-11 12:26:37.464719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.472 [2024-06-11 12:26:37.464726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.472 [2024-06-11 12:26:37.464869] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.472 [2024-06-11 12:26:37.464994] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.472 [2024-06-11 12:26:37.465002] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.472 [2024-06-11 12:26:37.465008] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.472 [2024-06-11 12:26:37.467367] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.472 [2024-06-11 12:26:37.476355] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.472 [2024-06-11 12:26:37.476855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.472 [2024-06-11 12:26:37.477059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.472 [2024-06-11 12:26:37.477070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.472 [2024-06-11 12:26:37.477077] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.472 [2024-06-11 12:26:37.477201] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.472 [2024-06-11 12:26:37.477364] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.472 [2024-06-11 12:26:37.477372] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.472 [2024-06-11 12:26:37.477379] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.472 [2024-06-11 12:26:37.479633] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.472 [2024-06-11 12:26:37.488854] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.472 [2024-06-11 12:26:37.489071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.472 [2024-06-11 12:26:37.489378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.472 [2024-06-11 12:26:37.489389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.472 [2024-06-11 12:26:37.489396] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.472 [2024-06-11 12:26:37.489539] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.472 [2024-06-11 12:26:37.489646] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.472 [2024-06-11 12:26:37.489653] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.472 [2024-06-11 12:26:37.489663] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.472 [2024-06-11 12:26:37.491838] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.472 [2024-06-11 12:26:37.501336] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.472 [2024-06-11 12:26:37.501807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.472 [2024-06-11 12:26:37.501982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.472 [2024-06-11 12:26:37.501991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.472 [2024-06-11 12:26:37.501998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.472 [2024-06-11 12:26:37.502146] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.472 [2024-06-11 12:26:37.502347] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.472 [2024-06-11 12:26:37.502355] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.472 [2024-06-11 12:26:37.502362] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.734 [2024-06-11 12:26:37.504568] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.734 [2024-06-11 12:26:37.513566] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.734 [2024-06-11 12:26:37.513945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.734 [2024-06-11 12:26:37.514282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.735 [2024-06-11 12:26:37.514292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.735 [2024-06-11 12:26:37.514300] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.735 [2024-06-11 12:26:37.514480] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.735 [2024-06-11 12:26:37.514642] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.735 [2024-06-11 12:26:37.514650] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.735 [2024-06-11 12:26:37.514656] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.735 [2024-06-11 12:26:37.517091] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
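The repeated "connect() failed, errno = 111" entries above are ECONNREFUSED: bdev_nvme keeps retrying its controller reset while nothing is accepting TCP connections on 10.0.0.2:4420 yet, so every reconnect attempt is refused and the reset cycle fails again. A minimal bash sketch of the same condition (illustrative only; it assumes a Linux host, where errno 111 maps to ECONNREFUSED, and reuses the address/port from the trace):

  # Probe the target port with bash's /dev/tcp redirection; this fails while no listener is bound.
  if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
    echo "connect to 10.0.0.2:4420 refused (errno 111 = ECONNREFUSED)"
  fi
  # Confirm what errno 111 means on this host.
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'

Once a listener is created on that address (see the nvmf_tcp_listen notice further below), the same reset path succeeds.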
00:32:24.735 [2024-06-11 12:26:37.526160] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.735 [2024-06-11 12:26:37.526618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.735 [2024-06-11 12:26:37.526922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.735 [2024-06-11 12:26:37.526931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.735 [2024-06-11 12:26:37.526938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.735 [2024-06-11 12:26:37.527049] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.735 [2024-06-11 12:26:37.527193] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.735 [2024-06-11 12:26:37.527201] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.735 [2024-06-11 12:26:37.527207] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.735 [2024-06-11 12:26:37.529401] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.735 12:26:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:24.735 12:26:37 -- common/autotest_common.sh@852 -- # return 0 00:32:24.735 12:26:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:24.735 12:26:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:24.735 12:26:37 -- common/autotest_common.sh@10 -- # set +x 00:32:24.735 [2024-06-11 12:26:37.538577] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.735 [2024-06-11 12:26:37.538885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.735 [2024-06-11 12:26:37.539221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.735 [2024-06-11 12:26:37.539231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.735 [2024-06-11 12:26:37.539238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.735 [2024-06-11 12:26:37.539400] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.735 [2024-06-11 12:26:37.539563] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.735 [2024-06-11 12:26:37.539571] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.735 [2024-06-11 12:26:37.539578] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.735 [2024-06-11 12:26:37.541802] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.735 [2024-06-11 12:26:37.551038] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.735 [2024-06-11 12:26:37.551427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.735 [2024-06-11 12:26:37.551624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.735 [2024-06-11 12:26:37.551633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.735 [2024-06-11 12:26:37.551640] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.735 [2024-06-11 12:26:37.551783] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.735 [2024-06-11 12:26:37.551927] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.735 [2024-06-11 12:26:37.551935] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.735 [2024-06-11 12:26:37.551942] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.735 [2024-06-11 12:26:37.554001] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.735 [2024-06-11 12:26:37.563490] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.735 [2024-06-11 12:26:37.563941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.735 [2024-06-11 12:26:37.564321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.735 [2024-06-11 12:26:37.564331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.735 [2024-06-11 12:26:37.564338] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.735 [2024-06-11 12:26:37.564482] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.735 [2024-06-11 12:26:37.564588] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.735 [2024-06-11 12:26:37.564596] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.735 [2024-06-11 12:26:37.564607] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.735 [2024-06-11 12:26:37.566929] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.735 [2024-06-11 12:26:37.576154] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.735 [2024-06-11 12:26:37.576595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.735 [2024-06-11 12:26:37.576899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.735 [2024-06-11 12:26:37.576908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.735 [2024-06-11 12:26:37.576915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.735 [2024-06-11 12:26:37.577044] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.735 [2024-06-11 12:26:37.577187] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.735 [2024-06-11 12:26:37.577195] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.735 [2024-06-11 12:26:37.577201] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.735 12:26:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:24.735 12:26:37 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:24.735 12:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:24.735 12:26:37 -- common/autotest_common.sh@10 -- # set +x 00:32:24.735 [2024-06-11 12:26:37.579843] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.735 [2024-06-11 12:26:37.581721] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:24.735 12:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:24.735 12:26:37 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:24.735 12:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:24.735 12:26:37 -- common/autotest_common.sh@10 -- # set +x 00:32:24.735 [2024-06-11 12:26:37.588755] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.735 [2024-06-11 12:26:37.589215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.735 [2024-06-11 12:26:37.589418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.735 [2024-06-11 12:26:37.589428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.735 [2024-06-11 12:26:37.589435] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.735 [2024-06-11 12:26:37.589615] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.735 [2024-06-11 12:26:37.589777] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.735 [2024-06-11 12:26:37.589785] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.735 [2024-06-11 12:26:37.589791] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:32:24.735 [2024-06-11 12:26:37.591966] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.735 [2024-06-11 12:26:37.601145] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.735 [2024-06-11 12:26:37.601556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.735 [2024-06-11 12:26:37.601876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.735 [2024-06-11 12:26:37.601886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.736 [2024-06-11 12:26:37.601896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.736 [2024-06-11 12:26:37.602025] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.736 [2024-06-11 12:26:37.602169] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.736 [2024-06-11 12:26:37.602176] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.736 [2024-06-11 12:26:37.602183] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.736 [2024-06-11 12:26:37.604465] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.736 Malloc0 00:32:24.736 [2024-06-11 12:26:37.613859] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.736 [2024-06-11 12:26:37.614301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.736 12:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:24.736 [2024-06-11 12:26:37.614633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.736 [2024-06-11 12:26:37.614646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.736 [2024-06-11 12:26:37.614656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.736 12:26:37 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:24.736 [2024-06-11 12:26:37.614804] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.736 [2024-06-11 12:26:37.614952] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.736 [2024-06-11 12:26:37.614960] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.736 [2024-06-11 12:26:37.614968] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.736 12:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:24.736 12:26:37 -- common/autotest_common.sh@10 -- # set +x 00:32:24.736 [2024-06-11 12:26:37.616984] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:24.736 12:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:24.736 [2024-06-11 12:26:37.626236] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.736 12:26:37 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:24.736 [2024-06-11 12:26:37.626815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.736 12:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:24.736 [2024-06-11 12:26:37.627058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.736 [2024-06-11 12:26:37.627073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.736 12:26:37 -- common/autotest_common.sh@10 -- # set +x 00:32:24.736 [2024-06-11 12:26:37.627082] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.736 [2024-06-11 12:26:37.627245] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.736 [2024-06-11 12:26:37.627355] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.736 [2024-06-11 12:26:37.627364] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.736 [2024-06-11 12:26:37.627371] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.736 [2024-06-11 12:26:37.629552] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.736 12:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:24.736 [2024-06-11 12:26:37.638505] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.736 12:26:37 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:24.736 [2024-06-11 12:26:37.638840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.736 12:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:24.736 [2024-06-11 12:26:37.639022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.736 [2024-06-11 12:26:37.639033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xedc4a0 with addr=10.0.0.2, port=4420 00:32:24.736 [2024-06-11 12:26:37.639040] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedc4a0 is same with the state(5) to be set 00:32:24.736 12:26:37 -- common/autotest_common.sh@10 -- # set +x 00:32:24.736 [2024-06-11 12:26:37.639222] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc4a0 (9): Bad file descriptor 00:32:24.736 [2024-06-11 12:26:37.639366] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:24.736 [2024-06-11 12:26:37.639375] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:24.736 [2024-06-11 12:26:37.639382] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
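Interleaved with the reconnect errors, the xtrace lines above show the bdevperf harness configuring the target over RPC: create the TCP transport, create a 64 MiB malloc bdev, create subsystem nqn.2016-06.io.spdk:cnode1, attach the namespace, and add a TCP listener on 10.0.0.2:4420. As a rough sketch, the same bring-up can be issued directly with scripts/rpc.py from an SPDK source tree (the rpc.py path and default RPC socket are assumptions here; the arguments are the ones visible in the trace):

  # Target bring-up equivalent to the rpc_cmd calls traced above (sketch, not the script's exact invocation).
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The last call is what finally gives the initiator something to connect to; the listener notice and the "Resetting controller successful" message follow just below.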
00:32:24.736 [2024-06-11 12:26:37.641520] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.736 [2024-06-11 12:26:37.645333] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:24.736 12:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:24.736 12:26:37 -- host/bdevperf.sh@38 -- # wait 1697673 00:32:24.736 [2024-06-11 12:26:37.651176] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.736 [2024-06-11 12:26:37.681890] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:34.756 00:32:34.756 Latency(us) 00:32:34.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:34.757 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:34.757 Verification LBA range: start 0x0 length 0x4000 00:32:34.757 Nvme1n1 : 15.00 14511.76 56.69 14624.80 0.00 4378.20 529.07 22391.47 00:32:34.757 =================================================================================================================== 00:32:34.757 Total : 14511.76 56.69 14624.80 0.00 4378.20 529.07 22391.47 00:32:34.757 12:26:46 -- host/bdevperf.sh@39 -- # sync 00:32:34.757 12:26:46 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:34.757 12:26:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:34.757 12:26:46 -- common/autotest_common.sh@10 -- # set +x 00:32:34.757 12:26:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:34.757 12:26:46 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:32:34.757 12:26:46 -- host/bdevperf.sh@44 -- # nvmftestfini 00:32:34.757 12:26:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:34.757 12:26:46 -- nvmf/common.sh@116 -- # sync 00:32:34.757 12:26:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:34.757 12:26:46 -- nvmf/common.sh@119 -- # set +e 00:32:34.757 12:26:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:34.757 12:26:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:34.757 rmmod nvme_tcp 00:32:34.757 rmmod nvme_fabrics 00:32:34.757 rmmod nvme_keyring 00:32:34.757 12:26:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:34.757 12:26:46 -- nvmf/common.sh@123 -- # set -e 00:32:34.757 12:26:46 -- nvmf/common.sh@124 -- # return 0 00:32:34.757 12:26:46 -- nvmf/common.sh@477 -- # '[' -n 1698706 ']' 00:32:34.757 12:26:46 -- nvmf/common.sh@478 -- # killprocess 1698706 00:32:34.757 12:26:46 -- common/autotest_common.sh@926 -- # '[' -z 1698706 ']' 00:32:34.757 12:26:46 -- common/autotest_common.sh@930 -- # kill -0 1698706 00:32:34.757 12:26:46 -- common/autotest_common.sh@931 -- # uname 00:32:34.757 12:26:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:34.757 12:26:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1698706 00:32:34.757 12:26:46 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:34.757 12:26:46 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:34.757 12:26:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1698706' 00:32:34.757 killing process with pid 1698706 00:32:34.757 12:26:46 -- common/autotest_common.sh@945 -- # kill 1698706 00:32:34.757 12:26:46 -- common/autotest_common.sh@950 -- # wait 1698706 00:32:34.757 12:26:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:34.757 12:26:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:34.757 
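The summary table above is bdevperf's per-job report: the verify workload against Nvme1n1 ran for 15.00 s at 14511.76 IOPS (56.69 MiB/s) even though the controller was being reset throughout the run. For reference, a run with the same parameters (queue depth 128, 4 KiB I/O, verify workload, 15 s) can be launched roughly as below; the binary path and JSON config name are assumptions, and the exact flags used by host/bdevperf.sh may differ:

  # Illustrative bdevperf invocation matching the table's parameters (sketch only).
  # nvme_attach.json is assumed to contain a bdev_nvme_attach_controller entry pointing at the
  # target (trtype tcp, traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1).
  ./build/examples/bdevperf --json nvme_attach.json -q 128 -o 4096 -w verify -t 15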
12:26:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:34.757 12:26:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:34.757 12:26:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:34.757 12:26:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.757 12:26:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:34.757 12:26:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:35.698 12:26:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:35.698 00:32:35.698 real 0m27.555s 00:32:35.698 user 1m2.129s 00:32:35.698 sys 0m7.156s 00:32:35.698 12:26:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:35.698 12:26:48 -- common/autotest_common.sh@10 -- # set +x 00:32:35.698 ************************************ 00:32:35.698 END TEST nvmf_bdevperf 00:32:35.698 ************************************ 00:32:35.698 12:26:48 -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:35.698 12:26:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:35.698 12:26:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:35.698 12:26:48 -- common/autotest_common.sh@10 -- # set +x 00:32:35.698 ************************************ 00:32:35.698 START TEST nvmf_target_disconnect 00:32:35.698 ************************************ 00:32:35.698 12:26:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:35.959 * Looking for test storage... 00:32:35.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:35.959 12:26:48 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:35.959 12:26:48 -- nvmf/common.sh@7 -- # uname -s 00:32:35.959 12:26:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:35.959 12:26:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:35.959 12:26:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:35.959 12:26:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:35.959 12:26:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:35.959 12:26:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:35.959 12:26:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:35.959 12:26:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:35.959 12:26:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:35.959 12:26:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:35.959 12:26:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:35.959 12:26:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:35.959 12:26:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:35.959 12:26:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:35.959 12:26:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:35.959 12:26:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:35.959 12:26:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:35.959 12:26:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:35.959 12:26:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:35.959 
12:26:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.959 12:26:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.959 12:26:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.959 12:26:48 -- paths/export.sh@5 -- # export PATH 00:32:35.960 12:26:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.960 12:26:48 -- nvmf/common.sh@46 -- # : 0 00:32:35.960 12:26:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:35.960 12:26:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:35.960 12:26:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:35.960 12:26:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:35.960 12:26:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:35.960 12:26:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:35.960 12:26:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:35.960 12:26:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:35.960 12:26:48 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:35.960 12:26:48 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:32:35.960 12:26:48 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:32:35.960 12:26:48 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:32:35.960 12:26:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:35.960 12:26:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:35.960 12:26:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:35.960 
12:26:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:35.960 12:26:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:35.960 12:26:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:35.960 12:26:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:35.960 12:26:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:35.960 12:26:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:35.960 12:26:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:35.960 12:26:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:35.960 12:26:48 -- common/autotest_common.sh@10 -- # set +x 00:32:42.550 12:26:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:42.550 12:26:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:42.550 12:26:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:42.550 12:26:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:42.550 12:26:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:42.550 12:26:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:42.550 12:26:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:42.550 12:26:55 -- nvmf/common.sh@294 -- # net_devs=() 00:32:42.550 12:26:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:42.550 12:26:55 -- nvmf/common.sh@295 -- # e810=() 00:32:42.550 12:26:55 -- nvmf/common.sh@295 -- # local -ga e810 00:32:42.550 12:26:55 -- nvmf/common.sh@296 -- # x722=() 00:32:42.550 12:26:55 -- nvmf/common.sh@296 -- # local -ga x722 00:32:42.550 12:26:55 -- nvmf/common.sh@297 -- # mlx=() 00:32:42.550 12:26:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:42.550 12:26:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:42.550 12:26:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:42.550 12:26:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:42.550 12:26:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:42.550 12:26:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:42.550 12:26:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:42.550 12:26:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:42.550 12:26:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:42.550 12:26:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:42.550 12:26:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:42.550 12:26:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:42.550 12:26:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:42.550 12:26:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:42.550 12:26:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:42.550 12:26:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:42.550 12:26:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:42.550 12:26:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:42.550 12:26:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:42.550 12:26:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:42.550 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:42.550 12:26:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:42.550 12:26:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:42.550 12:26:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:42.550 12:26:55 -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:32:42.550 12:26:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:42.550 12:26:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:42.550 12:26:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:42.550 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:42.550 12:26:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:42.550 12:26:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:42.550 12:26:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:42.550 12:26:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:42.550 12:26:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:42.550 12:26:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:42.550 12:26:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:42.550 12:26:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:42.550 12:26:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:42.550 12:26:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:42.550 12:26:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:42.550 12:26:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:42.550 12:26:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:42.550 Found net devices under 0000:31:00.0: cvl_0_0 00:32:42.550 12:26:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:42.550 12:26:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:42.550 12:26:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:42.550 12:26:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:42.550 12:26:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:42.550 12:26:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:42.550 Found net devices under 0000:31:00.1: cvl_0_1 00:32:42.550 12:26:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:42.550 12:26:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:42.550 12:26:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:42.550 12:26:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:42.550 12:26:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:42.550 12:26:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:42.550 12:26:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:42.550 12:26:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:42.550 12:26:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:42.550 12:26:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:42.550 12:26:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:42.550 12:26:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:42.550 12:26:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:42.550 12:26:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:42.550 12:26:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:42.550 12:26:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:42.550 12:26:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:42.550 12:26:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:42.550 12:26:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:42.812 12:26:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:42.812 12:26:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:32:42.812 12:26:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:42.812 12:26:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:42.812 12:26:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:42.812 12:26:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:42.812 12:26:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:42.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:42.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.520 ms 00:32:42.812 00:32:42.812 --- 10.0.0.2 ping statistics --- 00:32:42.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:42.812 rtt min/avg/max/mdev = 0.520/0.520/0.520/0.000 ms 00:32:42.812 12:26:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:42.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:42.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:32:42.812 00:32:42.812 --- 10.0.0.1 ping statistics --- 00:32:42.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:42.812 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:32:42.812 12:26:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:42.812 12:26:55 -- nvmf/common.sh@410 -- # return 0 00:32:42.812 12:26:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:42.812 12:26:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:42.812 12:26:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:42.812 12:26:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:42.812 12:26:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:42.812 12:26:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:42.812 12:26:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:42.812 12:26:55 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:32:42.812 12:26:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:42.812 12:26:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:42.812 12:26:55 -- common/autotest_common.sh@10 -- # set +x 00:32:42.812 ************************************ 00:32:42.812 START TEST nvmf_target_disconnect_tc1 00:32:42.812 ************************************ 00:32:42.812 12:26:55 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:32:42.812 12:26:55 -- host/target_disconnect.sh@32 -- # set +e 00:32:42.812 12:26:55 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:43.072 EAL: No free 2048 kB hugepages reported on node 1 00:32:43.072 [2024-06-11 12:26:55.892174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.072 [2024-06-11 12:26:55.892515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.072 [2024-06-11 12:26:55.892528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdded10 with addr=10.0.0.2, port=4420 00:32:43.072 [2024-06-11 12:26:55.892547] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:43.072 [2024-06-11 12:26:55.892557] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:43.072 [2024-06-11 12:26:55.892564] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 
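For reference, the interface preparation traced above amounts to the minimal sketch below. It assumes the two ice ports have already been named cvl_0_0 (target side) and cvl_0_1 (initiator side); the real nvmf_tcp_init helper in nvmf/common.sh derives those names from the PCI scan shown earlier, and this sketch skips that discovery.

#!/usr/bin/env bash
# Minimal sketch of the TCP test-interface setup traced above (not the full helper).
set -euo pipefail

TARGET_NS=cvl_0_0_ns_spdk
INITIATOR_IP=10.0.0.1
TARGET_IP=10.0.0.2

# Start from clean addresses on both ports.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# Move the target port into its own network namespace.
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"

# Address both ends and bring the links up.
ip addr add "$INITIATOR_IP/24" dev cvl_0_1
ip netns exec "$TARGET_NS" ip addr add "$TARGET_IP/24" dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up

# Allow NVMe/TCP traffic in and verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 "$TARGET_IP"
ip netns exec "$TARGET_NS" ping -c 1 "$INITIATOR_IP"

Placing the target port in its own namespace is what lets the initiator and target share one host while still exercising a real TCP path over the physical NICs, which is why both ping checks above must pass before the disconnect tests start.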
00:32:43.073 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:32:43.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:32:43.073 Initializing NVMe Controllers 00:32:43.073 12:26:55 -- host/target_disconnect.sh@33 -- # trap - ERR 00:32:43.073 12:26:55 -- host/target_disconnect.sh@33 -- # print_backtrace 00:32:43.073 12:26:55 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:32:43.073 12:26:55 -- common/autotest_common.sh@1132 -- # return 0 00:32:43.073 12:26:55 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:32:43.073 12:26:55 -- host/target_disconnect.sh@41 -- # set -e 00:32:43.073 00:32:43.073 real 0m0.107s 00:32:43.073 user 0m0.043s 00:32:43.073 sys 0m0.064s 00:32:43.073 12:26:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:43.073 12:26:55 -- common/autotest_common.sh@10 -- # set +x 00:32:43.073 ************************************ 00:32:43.073 END TEST nvmf_target_disconnect_tc1 00:32:43.073 ************************************ 00:32:43.073 12:26:55 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:32:43.073 12:26:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:43.073 12:26:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:43.073 12:26:55 -- common/autotest_common.sh@10 -- # set +x 00:32:43.073 ************************************ 00:32:43.073 START TEST nvmf_target_disconnect_tc2 00:32:43.073 ************************************ 00:32:43.073 12:26:55 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:32:43.073 12:26:55 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:32:43.073 12:26:55 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:43.073 12:26:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:43.073 12:26:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:43.073 12:26:55 -- common/autotest_common.sh@10 -- # set +x 00:32:43.073 12:26:55 -- nvmf/common.sh@469 -- # nvmfpid=1704836 00:32:43.073 12:26:55 -- nvmf/common.sh@470 -- # waitforlisten 1704836 00:32:43.073 12:26:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:43.073 12:26:55 -- common/autotest_common.sh@819 -- # '[' -z 1704836 ']' 00:32:43.073 12:26:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:43.073 12:26:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:43.073 12:26:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:43.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:43.073 12:26:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:43.073 12:26:55 -- common/autotest_common.sh@10 -- # set +x 00:32:43.073 [2024-06-11 12:26:55.999902] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:32:43.073 [2024-06-11 12:26:55.999947] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:43.073 EAL: No free 2048 kB hugepages reported on node 1 00:32:43.073 [2024-06-11 12:26:56.083357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:43.333 [2024-06-11 12:26:56.113614] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:43.333 [2024-06-11 12:26:56.113737] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:43.333 [2024-06-11 12:26:56.113744] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:43.333 [2024-06-11 12:26:56.113752] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:43.333 [2024-06-11 12:26:56.113888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:32:43.333 [2024-06-11 12:26:56.114064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:32:43.333 [2024-06-11 12:26:56.114365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:32:43.333 [2024-06-11 12:26:56.114366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:32:43.905 12:26:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:43.905 12:26:56 -- common/autotest_common.sh@852 -- # return 0 00:32:43.905 12:26:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:43.905 12:26:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:43.905 12:26:56 -- common/autotest_common.sh@10 -- # set +x 00:32:43.905 12:26:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:43.905 12:26:56 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:43.905 12:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:43.905 12:26:56 -- common/autotest_common.sh@10 -- # set +x 00:32:43.905 Malloc0 00:32:43.905 12:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.905 12:26:56 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:43.905 12:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:43.905 12:26:56 -- common/autotest_common.sh@10 -- # set +x 00:32:43.905 [2024-06-11 12:26:56.833344] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:43.905 12:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.905 12:26:56 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:43.905 12:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:43.905 12:26:56 -- common/autotest_common.sh@10 -- # set +x 00:32:43.905 12:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.905 12:26:56 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:43.905 12:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:43.905 12:26:56 -- common/autotest_common.sh@10 -- # set +x 00:32:43.905 12:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.905 12:26:56 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:43.905 12:26:56 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:32:43.905 12:26:56 -- common/autotest_common.sh@10 -- # set +x 00:32:43.905 [2024-06-11 12:26:56.873599] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:43.905 12:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.905 12:26:56 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:43.905 12:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:43.905 12:26:56 -- common/autotest_common.sh@10 -- # set +x 00:32:43.905 12:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.905 12:26:56 -- host/target_disconnect.sh@50 -- # reconnectpid=1705009 00:32:43.905 12:26:56 -- host/target_disconnect.sh@52 -- # sleep 2 00:32:43.905 12:26:56 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:44.166 EAL: No free 2048 kB hugepages reported on node 1 00:32:46.086 12:26:58 -- host/target_disconnect.sh@53 -- # kill -9 1704836 00:32:46.086 12:26:58 -- host/target_disconnect.sh@55 -- # sleep 2 00:32:46.086 Read completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Read completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Read completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Read completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Read completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Read completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Read completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Read completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Read completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Read completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Read completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Read completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Read completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Write completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Read completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Write completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Write completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Write completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Write completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Read completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Write completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Write completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Write completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Read completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Read completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Write completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Read completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 
00:32:46.086 Read completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Read completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Read completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Read completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 Read completed with error (sct=0, sc=8) 00:32:46.086 starting I/O failed 00:32:46.086 [2024-06-11 12:26:58.905539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:46.086 [2024-06-11 12:26:58.905790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-06-11 12:26:58.906264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-06-11 12:26:58.906291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-06-11 12:26:58.906625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-06-11 12:26:58.906972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-06-11 12:26:58.906980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-06-11 12:26:58.907423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-06-11 12:26:58.907755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-06-11 12:26:58.907765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-06-11 12:26:58.907998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-06-11 12:26:58.908259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-06-11 12:26:58.908286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-06-11 12:26:58.908503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-06-11 12:26:58.908651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-06-11 12:26:58.908659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-06-11 12:26:58.908972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-06-11 12:26:58.909322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-06-11 12:26:58.909329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 
00:32:46.086 [2024-06-11 12:26:58.909646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-06-11 12:26:58.909956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-06-11 12:26:58.909963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-06-11 12:26:58.910175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-06-11 12:26:58.910442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-06-11 12:26:58.910450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-06-11 12:26:58.910747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.910999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.911007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-06-11 12:26:58.911404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.911707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.911714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-06-11 12:26:58.912008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.912395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.912403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-06-11 12:26:58.912659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.912938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.912945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-06-11 12:26:58.913247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.913569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.913577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 
00:32:46.087 [2024-06-11 12:26:58.913876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.914098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.914106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-06-11 12:26:58.914295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.914633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.914639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-06-11 12:26:58.914923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.915224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.915230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-06-11 12:26:58.915546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.915866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.915872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-06-11 12:26:58.916061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.916422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.916429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-06-11 12:26:58.916719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.917042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.917049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-06-11 12:26:58.917133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.917469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.917476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 
00:32:46.087 [2024-06-11 12:26:58.917692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.917886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.917892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-06-11 12:26:58.918262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.918599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.918605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-06-11 12:26:58.918789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.919068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.919075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-06-11 12:26:58.919369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.919704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.919710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-06-11 12:26:58.920026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.920344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.920350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-06-11 12:26:58.920628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.920821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.920828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-06-11 12:26:58.921154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.921455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.921461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 
00:32:46.087 [2024-06-11 12:26:58.921667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.921992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.921998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-06-11 12:26:58.922424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.922721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.922727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-06-11 12:26:58.923059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.923352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.923361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-06-11 12:26:58.923652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.923926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.923933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-06-11 12:26:58.924051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.924408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.924415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-06-11 12:26:58.924696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.925001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.925007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-06-11 12:26:58.925305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.925565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.925572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 
00:32:46.087 [2024-06-11 12:26:58.925869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.926232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.926239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-06-11 12:26:58.926551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.926838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.926844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-06-11 12:26:58.927175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.927495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-06-11 12:26:58.927501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.088 [2024-06-11 12:26:58.927817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.928144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.928150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 00:32:46.088 [2024-06-11 12:26:58.928250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.928545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.928552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 00:32:46.088 [2024-06-11 12:26:58.928859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.929207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.929213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 00:32:46.088 [2024-06-11 12:26:58.929564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.929889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.929895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 
00:32:46.088 [2024-06-11 12:26:58.930092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.930160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.930166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 00:32:46.088 [2024-06-11 12:26:58.930360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.930633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.930640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 00:32:46.088 [2024-06-11 12:26:58.930944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.931121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.931127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 00:32:46.088 [2024-06-11 12:26:58.931356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.931653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.931659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 00:32:46.088 [2024-06-11 12:26:58.931828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.932071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.932078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 00:32:46.088 [2024-06-11 12:26:58.932401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.932670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.932676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 00:32:46.088 [2024-06-11 12:26:58.932762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.933034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.933041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 
00:32:46.088 [2024-06-11 12:26:58.933324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.933628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.933634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 00:32:46.088 [2024-06-11 12:26:58.933950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.934270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.934276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 00:32:46.088 [2024-06-11 12:26:58.934592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.934798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.934804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 00:32:46.088 [2024-06-11 12:26:58.934991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.935288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.935295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 00:32:46.088 [2024-06-11 12:26:58.935507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.935682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.935689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 00:32:46.088 [2024-06-11 12:26:58.936007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.936384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.936391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 00:32:46.088 [2024-06-11 12:26:58.936555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.936847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.936854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 
00:32:46.088 [2024-06-11 12:26:58.937213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.937506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.937512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 00:32:46.088 [2024-06-11 12:26:58.937858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.938136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.938143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 00:32:46.088 [2024-06-11 12:26:58.938421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.938630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.938637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 00:32:46.088 [2024-06-11 12:26:58.938984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.939346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.939353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 00:32:46.088 [2024-06-11 12:26:58.939531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.939862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.939869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 00:32:46.088 [2024-06-11 12:26:58.940157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.940483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.940490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 00:32:46.088 [2024-06-11 12:26:58.940666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.940963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.940969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 
00:32:46.088 [2024-06-11 12:26:58.941242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.941560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.941567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 00:32:46.088 [2024-06-11 12:26:58.941851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.942043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.942050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 00:32:46.088 [2024-06-11 12:26:58.942409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.942707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.942713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.088 qpair failed and we were unable to recover it. 00:32:46.088 [2024-06-11 12:26:58.943023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.943237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.088 [2024-06-11 12:26:58.943243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 00:32:46.089 [2024-06-11 12:26:58.943563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.943906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.943913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 00:32:46.089 [2024-06-11 12:26:58.944218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.944530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.944536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 00:32:46.089 [2024-06-11 12:26:58.944871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.945186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.945193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 
00:32:46.089 [2024-06-11 12:26:58.945499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.945672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.945678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 00:32:46.089 [2024-06-11 12:26:58.945935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.946337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.946344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 00:32:46.089 [2024-06-11 12:26:58.946497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.946816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.946822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 00:32:46.089 [2024-06-11 12:26:58.947052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.947362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.947369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 00:32:46.089 [2024-06-11 12:26:58.947680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.947874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.947881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 00:32:46.089 [2024-06-11 12:26:58.948220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.948547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.948553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 00:32:46.089 [2024-06-11 12:26:58.948884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.949205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.949212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 
00:32:46.089 [2024-06-11 12:26:58.949390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.949682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.949689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 00:32:46.089 [2024-06-11 12:26:58.950000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.950385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.950391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 00:32:46.089 [2024-06-11 12:26:58.950705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.951014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.951029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 00:32:46.089 [2024-06-11 12:26:58.951380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.951692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.951698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 00:32:46.089 [2024-06-11 12:26:58.952133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.952278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.952285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 00:32:46.089 [2024-06-11 12:26:58.952624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.952962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.952968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 00:32:46.089 [2024-06-11 12:26:58.953284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.953449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.953456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 
00:32:46.089 [2024-06-11 12:26:58.953727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.954033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.954040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 00:32:46.089 [2024-06-11 12:26:58.954357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.954656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.954663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 00:32:46.089 [2024-06-11 12:26:58.954838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.955057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.955065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 00:32:46.089 [2024-06-11 12:26:58.955350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.955575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.955581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 00:32:46.089 [2024-06-11 12:26:58.955757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.956062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.956069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 00:32:46.089 [2024-06-11 12:26:58.956390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.956746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.956752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 00:32:46.089 [2024-06-11 12:26:58.957039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.957292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.957298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 
00:32:46.089 [2024-06-11 12:26:58.957480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.957634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.957643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 00:32:46.089 [2024-06-11 12:26:58.957862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.958080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.958087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 00:32:46.089 [2024-06-11 12:26:58.958449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.958778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.958784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 00:32:46.089 [2024-06-11 12:26:58.958980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.959159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.959166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.089 qpair failed and we were unable to recover it. 00:32:46.089 [2024-06-11 12:26:58.959453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.089 [2024-06-11 12:26:58.959758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.959764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-06-11 12:26:58.959914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.960312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.960319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-06-11 12:26:58.960601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.960942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.960949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 
00:32:46.090 [2024-06-11 12:26:58.961164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.961434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.961440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-06-11 12:26:58.961793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.962096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.962103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-06-11 12:26:58.962441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.962669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.962675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-06-11 12:26:58.962966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.963295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.963304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-06-11 12:26:58.963593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.963925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.963931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-06-11 12:26:58.964238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.964585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.964591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-06-11 12:26:58.964922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.965320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.965327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 
00:32:46.090 [2024-06-11 12:26:58.965515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.965601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.965608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-06-11 12:26:58.965828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.966145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.966152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-06-11 12:26:58.966231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.966510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.966516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-06-11 12:26:58.966792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.967098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.967104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-06-11 12:26:58.967457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.967787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.967794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-06-11 12:26:58.968099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.968466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.968472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-06-11 12:26:58.968786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.968947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.968956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 
00:32:46.090 [2024-06-11 12:26:58.969305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.969576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.969583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-06-11 12:26:58.969937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.970221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.970228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-06-11 12:26:58.970526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.970865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.970872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-06-11 12:26:58.971189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.971505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.971511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-06-11 12:26:58.971720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.972014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.972025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-06-11 12:26:58.972215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.972551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.972558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-06-11 12:26:58.972885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.973200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.973207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 
00:32:46.090 [2024-06-11 12:26:58.973439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.973708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.973715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-06-11 12:26:58.974042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-06-11 12:26:58.974336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.974343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-06-11 12:26:58.974538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.974869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.974877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-06-11 12:26:58.975079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.975394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.975400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-06-11 12:26:58.975635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.975971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.975977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-06-11 12:26:58.976344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.976655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.976661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-06-11 12:26:58.976949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.977214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.977220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 
00:32:46.091 [2024-06-11 12:26:58.977518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.977817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.977823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-06-11 12:26:58.978151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.978338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.978345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-06-11 12:26:58.978652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.978974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.978981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-06-11 12:26:58.979346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.979660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.979667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-06-11 12:26:58.980011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.980238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.980245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-06-11 12:26:58.980575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.980837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.980850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-06-11 12:26:58.981034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.981271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.981278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 
00:32:46.091 [2024-06-11 12:26:58.981572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.981874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.981881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-06-11 12:26:58.982205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.982380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.982387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-06-11 12:26:58.982708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.983030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.983037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-06-11 12:26:58.983369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.983686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.983693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-06-11 12:26:58.983897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.984175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.984182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-06-11 12:26:58.984575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.984611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.984618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-06-11 12:26:58.984888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.985227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.985235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 
00:32:46.091 [2024-06-11 12:26:58.985432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.985703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.985710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-06-11 12:26:58.986013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.986110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.986117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-06-11 12:26:58.986414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.986590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.986597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-06-11 12:26:58.986900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.987098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.987106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-06-11 12:26:58.987440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.987752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-06-11 12:26:58.987759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.092 [2024-06-11 12:26:58.987950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.988236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.988243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-06-11 12:26:58.988549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.988875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.988882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 
00:32:46.092 [2024-06-11 12:26:58.989139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.989441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.989448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-06-11 12:26:58.989747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.990034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.990042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-06-11 12:26:58.990320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.990641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.990648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-06-11 12:26:58.990907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.991126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.991134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-06-11 12:26:58.991447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.991748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.991755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-06-11 12:26:58.992046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.992392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.992400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-06-11 12:26:58.992621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.992888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.992895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 
00:32:46.092 [2024-06-11 12:26:58.993058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.993362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.993370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-06-11 12:26:58.993657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.993979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.993986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-06-11 12:26:58.994272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.994646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.994654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-06-11 12:26:58.994956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.995302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.995309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-06-11 12:26:58.995613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.995943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.995949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-06-11 12:26:58.996159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.996418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.996424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-06-11 12:26:58.996755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.997047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.997054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 
00:32:46.092 [2024-06-11 12:26:58.997180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.997360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.997366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-06-11 12:26:58.997548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.997818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.997824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-06-11 12:26:58.998114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.998461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.998467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-06-11 12:26:58.998681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.998940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.998946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-06-11 12:26:58.999285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.999577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:58.999583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-06-11 12:26:58.999909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:59.000237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:59.000244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-06-11 12:26:59.000549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:59.000825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:59.000832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 
00:32:46.092 [2024-06-11 12:26:59.001040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:59.001245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:59.001251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-06-11 12:26:59.001586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:59.001735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:59.001741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-06-11 12:26:59.002047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:59.002341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:59.002348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-06-11 12:26:59.002572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:59.002838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:59.002845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-06-11 12:26:59.003162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:59.003389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:59.003395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-06-11 12:26:59.003481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:59.003775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-06-11 12:26:59.003782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-06-11 12:26:59.004021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.004306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.004312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 
00:32:46.093 [2024-06-11 12:26:59.004485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.004752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.004759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-06-11 12:26:59.005042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.005204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.005212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-06-11 12:26:59.005513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.005695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.005703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-06-11 12:26:59.006010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.006292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.006299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-06-11 12:26:59.006611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.006947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.006953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-06-11 12:26:59.007254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.007445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.007452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-06-11 12:26:59.007762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.008059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.008065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 
00:32:46.093 [2024-06-11 12:26:59.008412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.008737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.008743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-06-11 12:26:59.008909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.009261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.009268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-06-11 12:26:59.009585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.009907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.009913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-06-11 12:26:59.010229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.010545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.010551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-06-11 12:26:59.010861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.011047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.011054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-06-11 12:26:59.011348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.011571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.011577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-06-11 12:26:59.011732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.012022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.012029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 
00:32:46.093 [2024-06-11 12:26:59.012340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.012636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.012643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-06-11 12:26:59.012967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.013152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.013158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-06-11 12:26:59.013501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.013794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.013801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-06-11 12:26:59.014090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.014314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.014320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-06-11 12:26:59.014522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.014860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.014866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-06-11 12:26:59.015127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.015425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.015432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-06-11 12:26:59.015745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.016053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.016059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 
00:32:46.093 [2024-06-11 12:26:59.016439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.016607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.016614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-06-11 12:26:59.016967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.017170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.017177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-06-11 12:26:59.017499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.017749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.017755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-06-11 12:26:59.018105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.018429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.018435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-06-11 12:26:59.018732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.019049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.019056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-06-11 12:26:59.019396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.019693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.019699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-06-11 12:26:59.020022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-06-11 12:26:59.020243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.020249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 
00:32:46.094 [2024-06-11 12:26:59.020568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.020880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.020886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 00:32:46.094 [2024-06-11 12:26:59.021190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.021523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.021529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 00:32:46.094 [2024-06-11 12:26:59.021889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.022266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.022272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 00:32:46.094 [2024-06-11 12:26:59.022591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.022790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.022796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 00:32:46.094 [2024-06-11 12:26:59.022954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.023263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.023270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 00:32:46.094 [2024-06-11 12:26:59.023445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.023750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.023757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 00:32:46.094 [2024-06-11 12:26:59.024065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.024376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.024383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 
00:32:46.094 [2024-06-11 12:26:59.024706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.025083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.025090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 00:32:46.094 [2024-06-11 12:26:59.025376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.025713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.025719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 00:32:46.094 [2024-06-11 12:26:59.026005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.026281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.026288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 00:32:46.094 [2024-06-11 12:26:59.026574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.026841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.026847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 00:32:46.094 [2024-06-11 12:26:59.027148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.027478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.027484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 00:32:46.094 [2024-06-11 12:26:59.027778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.028138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.028144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 00:32:46.094 [2024-06-11 12:26:59.028462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.028762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.028768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 
00:32:46.094 [2024-06-11 12:26:59.029067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.029145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.029151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 00:32:46.094 [2024-06-11 12:26:59.029429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.029632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.029638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 00:32:46.094 [2024-06-11 12:26:59.029960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.030327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.030334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 00:32:46.094 [2024-06-11 12:26:59.030646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.030965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.030971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 00:32:46.094 [2024-06-11 12:26:59.031191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.031532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.031538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 00:32:46.094 [2024-06-11 12:26:59.031853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.032166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.032174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 00:32:46.094 [2024-06-11 12:26:59.032323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.032597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.032603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 
00:32:46.094 [2024-06-11 12:26:59.032910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.033225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.033232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 00:32:46.094 [2024-06-11 12:26:59.033461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.033714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.033720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 00:32:46.094 [2024-06-11 12:26:59.033950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.034190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.034196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 00:32:46.094 [2024-06-11 12:26:59.034505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.034711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.034717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 00:32:46.094 [2024-06-11 12:26:59.035075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.035275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.035280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 00:32:46.094 [2024-06-11 12:26:59.035602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.035949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.035956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 00:32:46.094 [2024-06-11 12:26:59.036268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.036560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.036566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.094 qpair failed and we were unable to recover it. 
00:32:46.094 [2024-06-11 12:26:59.036887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.094 [2024-06-11 12:26:59.037153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.037159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.095 qpair failed and we were unable to recover it. 00:32:46.095 [2024-06-11 12:26:59.037472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.037789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.037797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.095 qpair failed and we were unable to recover it. 00:32:46.095 [2024-06-11 12:26:59.038091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.038417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.038424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.095 qpair failed and we were unable to recover it. 00:32:46.095 [2024-06-11 12:26:59.038733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.038919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.038926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.095 qpair failed and we were unable to recover it. 00:32:46.095 [2024-06-11 12:26:59.039110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.039372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.039379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.095 qpair failed and we were unable to recover it. 00:32:46.095 [2024-06-11 12:26:59.039676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.039998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.040005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.095 qpair failed and we were unable to recover it. 00:32:46.095 [2024-06-11 12:26:59.040240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.040442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.040449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.095 qpair failed and we were unable to recover it. 
00:32:46.095 [2024-06-11 12:26:59.040829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.041003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.041011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.095 qpair failed and we were unable to recover it. 00:32:46.095 [2024-06-11 12:26:59.041317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.041643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.041650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.095 qpair failed and we were unable to recover it. 00:32:46.095 [2024-06-11 12:26:59.041960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.042277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.042283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.095 qpair failed and we were unable to recover it. 00:32:46.095 [2024-06-11 12:26:59.042465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.042820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.042826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.095 qpair failed and we were unable to recover it. 00:32:46.095 [2024-06-11 12:26:59.042996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.043300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.043309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.095 qpair failed and we were unable to recover it. 00:32:46.095 [2024-06-11 12:26:59.043600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.043915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.043922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.095 qpair failed and we were unable to recover it. 00:32:46.095 [2024-06-11 12:26:59.044207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.044418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.044424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.095 qpair failed and we were unable to recover it. 
00:32:46.095 [2024-06-11 12:26:59.044717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.045088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.045094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.095 qpair failed and we were unable to recover it. 00:32:46.095 [2024-06-11 12:26:59.045395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.045732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.045738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.095 qpair failed and we were unable to recover it. 00:32:46.095 [2024-06-11 12:26:59.046026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.046315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.046322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.095 qpair failed and we were unable to recover it. 00:32:46.095 [2024-06-11 12:26:59.046654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.046965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.046972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.095 qpair failed and we were unable to recover it. 00:32:46.095 [2024-06-11 12:26:59.047266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.047571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.047578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.095 qpair failed and we were unable to recover it. 00:32:46.095 [2024-06-11 12:26:59.047874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.048191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.048198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.095 qpair failed and we were unable to recover it. 00:32:46.095 [2024-06-11 12:26:59.048529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.048838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.048851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.095 qpair failed and we were unable to recover it. 
00:32:46.095 [2024-06-11 12:26:59.049111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.049417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.049426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.095 qpair failed and we were unable to recover it. 00:32:46.095 [2024-06-11 12:26:59.049707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.050029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.050037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.095 qpair failed and we were unable to recover it. 00:32:46.095 [2024-06-11 12:26:59.050322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.050640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.050647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.095 qpair failed and we were unable to recover it. 00:32:46.095 [2024-06-11 12:26:59.050943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.051279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.051286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.095 qpair failed and we were unable to recover it. 00:32:46.095 [2024-06-11 12:26:59.051596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.095 [2024-06-11 12:26:59.051917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.051923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-06-11 12:26:59.052072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.052444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.052451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-06-11 12:26:59.052743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.053023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.053030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 
00:32:46.096 [2024-06-11 12:26:59.053241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.053535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.053541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-06-11 12:26:59.053819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.054140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.054146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-06-11 12:26:59.054458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.054775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.054782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-06-11 12:26:59.055035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.055338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.055346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-06-11 12:26:59.055632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.055922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.055928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-06-11 12:26:59.056210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.056487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.056493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-06-11 12:26:59.056812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.057117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.057125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 
00:32:46.096 [2024-06-11 12:26:59.057424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.057511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.057518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-06-11 12:26:59.057825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.058115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.058122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-06-11 12:26:59.058454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.058741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.058748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-06-11 12:26:59.059057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.059358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.059365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-06-11 12:26:59.059669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.059985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.059991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-06-11 12:26:59.060321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.060517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.060523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-06-11 12:26:59.060698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.060969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.060978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 
00:32:46.096 [2024-06-11 12:26:59.061400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.061553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.061560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-06-11 12:26:59.061841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.062157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.062163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-06-11 12:26:59.062592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.062887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.062893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-06-11 12:26:59.063166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.063493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.063500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-06-11 12:26:59.063560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.063844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.063851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-06-11 12:26:59.064157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.064480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.064486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-06-11 12:26:59.064812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.065128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.065134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 
00:32:46.096 [2024-06-11 12:26:59.065458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.065772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.065780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-06-11 12:26:59.066094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.066407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.066414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-06-11 12:26:59.066738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.066929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.066936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-06-11 12:26:59.067245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.067559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.067566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-06-11 12:26:59.067919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.068247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-06-11 12:26:59.068254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-06-11 12:26:59.068559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.068740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.068746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-06-11 12:26:59.068968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.069307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.069313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 
00:32:46.097 [2024-06-11 12:26:59.069570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.069901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.069908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-06-11 12:26:59.070230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.070545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.070552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-06-11 12:26:59.070756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.071068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.071075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-06-11 12:26:59.071279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.071430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.071437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-06-11 12:26:59.071755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.072094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.072101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-06-11 12:26:59.072434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.072769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.072776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-06-11 12:26:59.073107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.073403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.073410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 
00:32:46.097 [2024-06-11 12:26:59.073703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.073903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.073910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-06-11 12:26:59.074228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.074550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.074557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-06-11 12:26:59.074752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.075066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.075072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-06-11 12:26:59.075379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.075659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.075665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-06-11 12:26:59.075955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.076123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.076130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-06-11 12:26:59.076491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.076811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.076817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-06-11 12:26:59.077009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.077336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.077343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 
00:32:46.097 [2024-06-11 12:26:59.077646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.077868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.077875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-06-11 12:26:59.078166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.078381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.078388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-06-11 12:26:59.078697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.079014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.079024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-06-11 12:26:59.079334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.079626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.079632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-06-11 12:26:59.079980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.080303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.080310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-06-11 12:26:59.080465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.080881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.080887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-06-11 12:26:59.081175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.081487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.081493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 
00:32:46.097 [2024-06-11 12:26:59.081804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.082101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.082108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-06-11 12:26:59.082384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.082694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.082701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-06-11 12:26:59.082857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.083146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.083153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-06-11 12:26:59.083458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.083674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.083681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-06-11 12:26:59.083979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.084265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.084272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-06-11 12:26:59.084585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.084911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-06-11 12:26:59.084918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-06-11 12:26:59.085200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.085500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.085506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 
00:32:46.098 [2024-06-11 12:26:59.085802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.085982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.085989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-06-11 12:26:59.086303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.086588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.086594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-06-11 12:26:59.086922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.087087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.087094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-06-11 12:26:59.087360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.087696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.087702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-06-11 12:26:59.088001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.088285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.088292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-06-11 12:26:59.088486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.088816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.088823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-06-11 12:26:59.089129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.089443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.089449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 
00:32:46.098 [2024-06-11 12:26:59.089785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.089877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.089883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-06-11 12:26:59.090154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.090449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.090455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-06-11 12:26:59.090768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.091073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.091080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-06-11 12:26:59.091286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.091603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.091609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-06-11 12:26:59.091933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.092238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.092244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-06-11 12:26:59.092535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.092857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.092864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-06-11 12:26:59.093168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.093341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.093347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 
00:32:46.098 [2024-06-11 12:26:59.093708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.094039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.094045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-06-11 12:26:59.094335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.094633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.094639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-06-11 12:26:59.094923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.095253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.095260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-06-11 12:26:59.095557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.095877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.095884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-06-11 12:26:59.096185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.096490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.096496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-06-11 12:26:59.096648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.096993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.097000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-06-11 12:26:59.097328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.097504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.097511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 
00:32:46.098 [2024-06-11 12:26:59.097709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.098019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.098025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-06-11 12:26:59.098391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.098570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.098577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-06-11 12:26:59.098872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.099155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.099161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-06-11 12:26:59.099497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.099810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.099817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-06-11 12:26:59.100117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.100303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.100309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-06-11 12:26:59.100518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.100814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.100820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-06-11 12:26:59.101201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.101350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-06-11 12:26:59.101357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 
00:32:46.098 [2024-06-11 12:26:59.101620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.101953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.101959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-06-11 12:26:59.102256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.102575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.102581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-06-11 12:26:59.102868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.103152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.103158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-06-11 12:26:59.103316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.103594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.103600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-06-11 12:26:59.103905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.104224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.104230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-06-11 12:26:59.104606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.104944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.104950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-06-11 12:26:59.105270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.105586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.105593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 
00:32:46.099 [2024-06-11 12:26:59.105929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.106231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.106238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-06-11 12:26:59.106525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.106823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.106829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-06-11 12:26:59.107008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.107363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.107369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-06-11 12:26:59.107681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.107971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.107977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-06-11 12:26:59.108292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.108587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.108594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-06-11 12:26:59.108877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.109190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.109197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-06-11 12:26:59.109506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.109699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.109706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 
00:32:46.099 [2024-06-11 12:26:59.110024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.110331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.110339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-06-11 12:26:59.110643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.110856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.110863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-06-11 12:26:59.111162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.111463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.111477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-06-11 12:26:59.111783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.112101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.112110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-06-11 12:26:59.112417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.112711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.112719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-06-11 12:26:59.113007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.113167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-06-11 12:26:59.113174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.370 [2024-06-11 12:26:59.113372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.113712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.113719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.370 qpair failed and we were unable to recover it. 
00:32:46.370 [2024-06-11 12:26:59.114030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.114319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.114326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.370 qpair failed and we were unable to recover it. 00:32:46.370 [2024-06-11 12:26:59.114639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.114797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.114804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.370 qpair failed and we were unable to recover it. 00:32:46.370 [2024-06-11 12:26:59.115124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.115416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.115423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.370 qpair failed and we were unable to recover it. 00:32:46.370 [2024-06-11 12:26:59.115738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.116066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.116073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.370 qpair failed and we were unable to recover it. 00:32:46.370 [2024-06-11 12:26:59.116364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.116658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.116665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.370 qpair failed and we were unable to recover it. 00:32:46.370 [2024-06-11 12:26:59.116954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.117322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.117328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.370 qpair failed and we were unable to recover it. 00:32:46.370 [2024-06-11 12:26:59.117619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.117911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.117918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.370 qpair failed and we were unable to recover it. 
00:32:46.370 [2024-06-11 12:26:59.118154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.118361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.118367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.370 qpair failed and we were unable to recover it. 00:32:46.370 [2024-06-11 12:26:59.118687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.118892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.118898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.370 qpair failed and we were unable to recover it. 00:32:46.370 [2024-06-11 12:26:59.119185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.119510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.119517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.370 qpair failed and we were unable to recover it. 00:32:46.370 [2024-06-11 12:26:59.119866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.120148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.120155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.370 qpair failed and we were unable to recover it. 00:32:46.370 [2024-06-11 12:26:59.120462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.120778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.120785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.370 qpair failed and we were unable to recover it. 00:32:46.370 [2024-06-11 12:26:59.121168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.121434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.121441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.370 qpair failed and we were unable to recover it. 00:32:46.370 [2024-06-11 12:26:59.121743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.122068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.122075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.370 qpair failed and we were unable to recover it. 
00:32:46.370 [2024-06-11 12:26:59.122401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.122734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.122742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.370 qpair failed and we were unable to recover it. 00:32:46.370 [2024-06-11 12:26:59.123016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.123341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.123348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.370 qpair failed and we were unable to recover it. 00:32:46.370 [2024-06-11 12:26:59.123658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.123967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.123975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.370 qpair failed and we were unable to recover it. 00:32:46.370 [2024-06-11 12:26:59.124298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.124595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.124602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.370 qpair failed and we were unable to recover it. 00:32:46.370 [2024-06-11 12:26:59.124934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.125242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.125248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.370 qpair failed and we were unable to recover it. 00:32:46.370 [2024-06-11 12:26:59.125575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.125867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.370 [2024-06-11 12:26:59.125873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.370 qpair failed and we were unable to recover it. 00:32:46.370 [2024-06-11 12:26:59.126115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.126449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.126456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 
00:32:46.371 [2024-06-11 12:26:59.126738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.127064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.127071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 00:32:46.371 [2024-06-11 12:26:59.127370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.127684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.127690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 00:32:46.371 [2024-06-11 12:26:59.128001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.128186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.128195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 00:32:46.371 [2024-06-11 12:26:59.128474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.128763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.128771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 00:32:46.371 [2024-06-11 12:26:59.129079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.129407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.129415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 00:32:46.371 [2024-06-11 12:26:59.129711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.130027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.130034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 00:32:46.371 [2024-06-11 12:26:59.130311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.130635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.130641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 
00:32:46.371 [2024-06-11 12:26:59.130949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.131265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.131273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 00:32:46.371 [2024-06-11 12:26:59.131579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.131872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.131878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 00:32:46.371 [2024-06-11 12:26:59.132166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.132344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.132351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 00:32:46.371 [2024-06-11 12:26:59.132625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.132797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.132803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 00:32:46.371 [2024-06-11 12:26:59.133088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.133309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.133315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 00:32:46.371 [2024-06-11 12:26:59.133481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.133754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.133760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 00:32:46.371 [2024-06-11 12:26:59.133922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.134295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.134301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 
00:32:46.371 [2024-06-11 12:26:59.134610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.134915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.134921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 00:32:46.371 [2024-06-11 12:26:59.135314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.135578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.135584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 00:32:46.371 [2024-06-11 12:26:59.135791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.136112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.136118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 00:32:46.371 [2024-06-11 12:26:59.136412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.136716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.136722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 00:32:46.371 [2024-06-11 12:26:59.137012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.137306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.137314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 00:32:46.371 [2024-06-11 12:26:59.137611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.137930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.137936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 00:32:46.371 [2024-06-11 12:26:59.138182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.138394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.138400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 
00:32:46.371 [2024-06-11 12:26:59.138685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.139011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.139022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 00:32:46.371 [2024-06-11 12:26:59.139328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.139606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.139612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 00:32:46.371 [2024-06-11 12:26:59.139980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.140268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.140275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 00:32:46.371 [2024-06-11 12:26:59.140586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.140901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.140907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 00:32:46.371 [2024-06-11 12:26:59.141185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.141504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.371 [2024-06-11 12:26:59.141510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.371 qpair failed and we were unable to recover it. 00:32:46.372 [2024-06-11 12:26:59.141796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.141986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.141992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 00:32:46.372 [2024-06-11 12:26:59.142208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.142546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.142552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 
00:32:46.372 [2024-06-11 12:26:59.142865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.143050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.143059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 00:32:46.372 [2024-06-11 12:26:59.143361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.143655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.143661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 00:32:46.372 [2024-06-11 12:26:59.144003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.144292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.144299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 00:32:46.372 [2024-06-11 12:26:59.144480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.144860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.144866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 00:32:46.372 [2024-06-11 12:26:59.145237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.145538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.145544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 00:32:46.372 [2024-06-11 12:26:59.145878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.146177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.146184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 00:32:46.372 [2024-06-11 12:26:59.146391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.146673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.146679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 
00:32:46.372 [2024-06-11 12:26:59.147002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.147294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.147301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 00:32:46.372 [2024-06-11 12:26:59.147609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.147931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.147938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 00:32:46.372 [2024-06-11 12:26:59.148245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.148567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.148573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 00:32:46.372 [2024-06-11 12:26:59.148900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.149188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.149196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 00:32:46.372 [2024-06-11 12:26:59.149260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.149414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.149421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 00:32:46.372 [2024-06-11 12:26:59.149772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.150036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.150043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 00:32:46.372 [2024-06-11 12:26:59.150282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.150583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.150589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 
00:32:46.372 [2024-06-11 12:26:59.150806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.151075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.151082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 00:32:46.372 [2024-06-11 12:26:59.151389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.151713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.151719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 00:32:46.372 [2024-06-11 12:26:59.151904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.152104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.152110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 00:32:46.372 [2024-06-11 12:26:59.152400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.152708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.152715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 00:32:46.372 [2024-06-11 12:26:59.153006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.153294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.153308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 00:32:46.372 [2024-06-11 12:26:59.153621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.153937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.153944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 00:32:46.372 [2024-06-11 12:26:59.154262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.154575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.154583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 
00:32:46.372 [2024-06-11 12:26:59.154775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.155037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.155044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 00:32:46.372 [2024-06-11 12:26:59.155318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.155638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.155644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 00:32:46.372 [2024-06-11 12:26:59.155957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.156277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.156283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 00:32:46.372 [2024-06-11 12:26:59.156602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.156896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.156902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 00:32:46.372 [2024-06-11 12:26:59.157193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.157515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.372 [2024-06-11 12:26:59.157522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.372 qpair failed and we were unable to recover it. 00:32:46.372 [2024-06-11 12:26:59.157824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.158147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.158153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 00:32:46.373 [2024-06-11 12:26:59.158457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.158742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.158748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 
00:32:46.373 [2024-06-11 12:26:59.159056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.159403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.159410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 00:32:46.373 [2024-06-11 12:26:59.159707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.160041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.160049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 00:32:46.373 [2024-06-11 12:26:59.160371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.160692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.160698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 00:32:46.373 [2024-06-11 12:26:59.161044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.161345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.161351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 00:32:46.373 [2024-06-11 12:26:59.161725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.161974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.161980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 00:32:46.373 [2024-06-11 12:26:59.162306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.162703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.162709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 00:32:46.373 [2024-06-11 12:26:59.162866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.163220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.163227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 
00:32:46.373 [2024-06-11 12:26:59.163531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.163713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.163721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 00:32:46.373 [2024-06-11 12:26:59.164023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.164186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.164193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 00:32:46.373 [2024-06-11 12:26:59.164490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.164652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.164659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 00:32:46.373 [2024-06-11 12:26:59.164963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.165285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.165291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 00:32:46.373 [2024-06-11 12:26:59.165590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.165903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.165909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 00:32:46.373 [2024-06-11 12:26:59.166183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.166512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.166518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 00:32:46.373 [2024-06-11 12:26:59.166889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.167079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.167085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 
00:32:46.373 [2024-06-11 12:26:59.167375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.167691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.167698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 00:32:46.373 [2024-06-11 12:26:59.167850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.168173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.168180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 00:32:46.373 [2024-06-11 12:26:59.168482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.168683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.168689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 00:32:46.373 [2024-06-11 12:26:59.168996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.169324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.169330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 00:32:46.373 [2024-06-11 12:26:59.169618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.169948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.169954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 00:32:46.373 [2024-06-11 12:26:59.170256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.170576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.170583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 00:32:46.373 [2024-06-11 12:26:59.170892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.171184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.171191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 
00:32:46.373 [2024-06-11 12:26:59.171485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.171797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.171804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 00:32:46.373 [2024-06-11 12:26:59.171973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.172146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.172153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 00:32:46.373 [2024-06-11 12:26:59.172454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.172772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.172779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 00:32:46.373 [2024-06-11 12:26:59.173088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.173268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.173275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 00:32:46.373 [2024-06-11 12:26:59.173608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.173946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.173953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.373 qpair failed and we were unable to recover it. 00:32:46.373 [2024-06-11 12:26:59.174271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.174589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.373 [2024-06-11 12:26:59.174596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.374 qpair failed and we were unable to recover it. 00:32:46.374 [2024-06-11 12:26:59.174751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.175028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.175037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.374 qpair failed and we were unable to recover it. 
00:32:46.374 [2024-06-11 12:26:59.175316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.175623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.175629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.374 qpair failed and we were unable to recover it. 00:32:46.374 [2024-06-11 12:26:59.175974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.176189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.176196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.374 qpair failed and we were unable to recover it. 00:32:46.374 [2024-06-11 12:26:59.176512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.176843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.176849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.374 qpair failed and we were unable to recover it. 00:32:46.374 [2024-06-11 12:26:59.177052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.177396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.177402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.374 qpair failed and we were unable to recover it. 00:32:46.374 [2024-06-11 12:26:59.177692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.178026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.178033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.374 qpair failed and we were unable to recover it. 00:32:46.374 [2024-06-11 12:26:59.178405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.178758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.178765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.374 qpair failed and we were unable to recover it. 00:32:46.374 [2024-06-11 12:26:59.179090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.179405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.179411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.374 qpair failed and we were unable to recover it. 
00:32:46.374 [2024-06-11 12:26:59.179721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.180034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.180040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.374 qpair failed and we were unable to recover it. 00:32:46.374 [2024-06-11 12:26:59.180318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.180639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.180645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.374 qpair failed and we were unable to recover it. 00:32:46.374 [2024-06-11 12:26:59.180818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.181010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.181016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.374 qpair failed and we were unable to recover it. 00:32:46.374 [2024-06-11 12:26:59.181359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.181693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.181699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.374 qpair failed and we were unable to recover it. 00:32:46.374 [2024-06-11 12:26:59.181989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.182340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.182346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.374 qpair failed and we were unable to recover it. 00:32:46.374 [2024-06-11 12:26:59.182730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.183038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.183044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.374 qpair failed and we were unable to recover it. 00:32:46.374 [2024-06-11 12:26:59.183358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.183671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.183678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.374 qpair failed and we were unable to recover it. 
00:32:46.374 [2024-06-11 12:26:59.183970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.184255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.184261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.374 qpair failed and we were unable to recover it. 00:32:46.374 [2024-06-11 12:26:59.184572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.184861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.184868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.374 qpair failed and we were unable to recover it. 00:32:46.374 [2024-06-11 12:26:59.185167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.185474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.185480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.374 qpair failed and we were unable to recover it. 00:32:46.374 [2024-06-11 12:26:59.185787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.185972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.185979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.374 qpair failed and we were unable to recover it. 00:32:46.374 [2024-06-11 12:26:59.186301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.186604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.186611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.374 qpair failed and we were unable to recover it. 00:32:46.374 [2024-06-11 12:26:59.186925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.187228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.187235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.374 qpair failed and we were unable to recover it. 00:32:46.374 [2024-06-11 12:26:59.187545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.187850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.187856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.374 qpair failed and we were unable to recover it. 
00:32:46.374 [2024-06-11 12:26:59.188150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.188477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.374 [2024-06-11 12:26:59.188483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 00:32:46.375 [2024-06-11 12:26:59.188686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.189007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.189013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 00:32:46.375 [2024-06-11 12:26:59.189305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.189596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.189602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 00:32:46.375 [2024-06-11 12:26:59.189909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.190218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.190225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 00:32:46.375 [2024-06-11 12:26:59.190528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.190846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.190852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 00:32:46.375 [2024-06-11 12:26:59.191054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.191335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.191341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 00:32:46.375 [2024-06-11 12:26:59.191652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.191888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.191895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 
00:32:46.375 [2024-06-11 12:26:59.192230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.192521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.192528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 00:32:46.375 [2024-06-11 12:26:59.192830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.192996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.193004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 00:32:46.375 [2024-06-11 12:26:59.193304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.193618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.193625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 00:32:46.375 [2024-06-11 12:26:59.193933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.194124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.194132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 00:32:46.375 [2024-06-11 12:26:59.194456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.194770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.194778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 00:32:46.375 [2024-06-11 12:26:59.195074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.195268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.195275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 00:32:46.375 [2024-06-11 12:26:59.195660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.195803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.195810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 
00:32:46.375 [2024-06-11 12:26:59.196126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.196434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.196440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 00:32:46.375 [2024-06-11 12:26:59.196722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.197039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.197045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 00:32:46.375 [2024-06-11 12:26:59.197356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.197678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.197684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 00:32:46.375 [2024-06-11 12:26:59.197972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.198301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.198307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 00:32:46.375 [2024-06-11 12:26:59.198620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.198800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.198807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 00:32:46.375 [2024-06-11 12:26:59.198964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.199261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.199269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 00:32:46.375 [2024-06-11 12:26:59.199627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.199921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.199928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 
00:32:46.375 [2024-06-11 12:26:59.200220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.200545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.200552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 00:32:46.375 [2024-06-11 12:26:59.200860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.201176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.201183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 00:32:46.375 [2024-06-11 12:26:59.201505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.201809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.201815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 00:32:46.375 [2024-06-11 12:26:59.202149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.202371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.202378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 00:32:46.375 [2024-06-11 12:26:59.202593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.202958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.202964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 00:32:46.375 [2024-06-11 12:26:59.203262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.203559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.203565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 00:32:46.375 [2024-06-11 12:26:59.203842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.204127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.204134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.375 qpair failed and we were unable to recover it. 
00:32:46.375 [2024-06-11 12:26:59.204440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.204750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.375 [2024-06-11 12:26:59.204755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 00:32:46.376 [2024-06-11 12:26:59.205047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.205367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.205373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 00:32:46.376 [2024-06-11 12:26:59.205564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.205789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.205796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 00:32:46.376 [2024-06-11 12:26:59.205950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.206277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.206284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 00:32:46.376 [2024-06-11 12:26:59.206573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.206894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.206901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 00:32:46.376 [2024-06-11 12:26:59.207202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.207506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.207512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 00:32:46.376 [2024-06-11 12:26:59.207797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.208120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.208127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 
00:32:46.376 [2024-06-11 12:26:59.208293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.208647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.208653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 00:32:46.376 [2024-06-11 12:26:59.208954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.209264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.209271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 00:32:46.376 [2024-06-11 12:26:59.209639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.209829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.209836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 00:32:46.376 [2024-06-11 12:26:59.210163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.210470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.210477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 00:32:46.376 [2024-06-11 12:26:59.210785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.211113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.211120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 00:32:46.376 [2024-06-11 12:26:59.211327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.211682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.211689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 00:32:46.376 [2024-06-11 12:26:59.212004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.212315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.212321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 
00:32:46.376 [2024-06-11 12:26:59.212634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.212949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.212956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 00:32:46.376 [2024-06-11 12:26:59.213240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.213558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.213564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 00:32:46.376 [2024-06-11 12:26:59.213853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.214172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.214178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 00:32:46.376 [2024-06-11 12:26:59.214481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.214778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.214784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 00:32:46.376 [2024-06-11 12:26:59.214998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.215299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.215306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 00:32:46.376 [2024-06-11 12:26:59.215613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.215936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.215943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 00:32:46.376 [2024-06-11 12:26:59.216244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.216560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.216567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 
00:32:46.376 [2024-06-11 12:26:59.216895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.217173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.217179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 00:32:46.376 [2024-06-11 12:26:59.217482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.217796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.217802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 00:32:46.376 [2024-06-11 12:26:59.218110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.218402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.218408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 00:32:46.376 [2024-06-11 12:26:59.218695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.219003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.219009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 00:32:46.376 [2024-06-11 12:26:59.219323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.219603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.219609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 00:32:46.376 [2024-06-11 12:26:59.219917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.220249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.220256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 00:32:46.376 [2024-06-11 12:26:59.220414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.220685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.220691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 
00:32:46.376 [2024-06-11 12:26:59.221002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.221321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.376 [2024-06-11 12:26:59.221328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.376 qpair failed and we were unable to recover it. 00:32:46.376 [2024-06-11 12:26:59.221521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.221798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.221804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 00:32:46.377 [2024-06-11 12:26:59.222119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.222298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.222306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 00:32:46.377 [2024-06-11 12:26:59.222606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.222811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.222818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 00:32:46.377 [2024-06-11 12:26:59.223102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.223319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.223326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 00:32:46.377 [2024-06-11 12:26:59.223655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.223971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.223977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 00:32:46.377 [2024-06-11 12:26:59.224291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.224473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.224479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 
00:32:46.377 [2024-06-11 12:26:59.224778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.224939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.224945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 00:32:46.377 [2024-06-11 12:26:59.225240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.225536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.225542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 00:32:46.377 [2024-06-11 12:26:59.225827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.226141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.226148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 00:32:46.377 [2024-06-11 12:26:59.226455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.226793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.226800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 00:32:46.377 [2024-06-11 12:26:59.227159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.227305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.227312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 00:32:46.377 [2024-06-11 12:26:59.227417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.227704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.227710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 00:32:46.377 [2024-06-11 12:26:59.227917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.228238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.228245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 
00:32:46.377 [2024-06-11 12:26:59.228527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.228847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.228853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 00:32:46.377 [2024-06-11 12:26:59.229146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.229463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.229469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 00:32:46.377 [2024-06-11 12:26:59.229665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.229814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.229821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 00:32:46.377 [2024-06-11 12:26:59.230021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.230301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.230307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 00:32:46.377 [2024-06-11 12:26:59.230633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.230950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.230958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 00:32:46.377 [2024-06-11 12:26:59.231280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.231584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.231590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 00:32:46.377 [2024-06-11 12:26:59.231897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.232212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.232218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 
00:32:46.377 [2024-06-11 12:26:59.232512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.232832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.232838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 00:32:46.377 [2024-06-11 12:26:59.233153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.233485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.233491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 00:32:46.377 [2024-06-11 12:26:59.233791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.234104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.234110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 00:32:46.377 [2024-06-11 12:26:59.234404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.234700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.234706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 00:32:46.377 [2024-06-11 12:26:59.234996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.235174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.235181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 00:32:46.377 [2024-06-11 12:26:59.235545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.235860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.235866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 00:32:46.377 [2024-06-11 12:26:59.236051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.236361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.236368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 
00:32:46.377 [2024-06-11 12:26:59.236717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.237033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.237042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.377 qpair failed and we were unable to recover it. 00:32:46.377 [2024-06-11 12:26:59.237333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.377 [2024-06-11 12:26:59.237645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.237652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 00:32:46.378 [2024-06-11 12:26:59.237957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.238043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.238049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 00:32:46.378 [2024-06-11 12:26:59.238338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.238654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.238660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 00:32:46.378 [2024-06-11 12:26:59.238952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.239248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.239255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 00:32:46.378 [2024-06-11 12:26:59.239590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.239861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.239867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 00:32:46.378 [2024-06-11 12:26:59.240165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.240391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.240397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 
00:32:46.378 [2024-06-11 12:26:59.240677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.240949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.240956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 00:32:46.378 [2024-06-11 12:26:59.241266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.241587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.241594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 00:32:46.378 [2024-06-11 12:26:59.241765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.242077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.242084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 00:32:46.378 [2024-06-11 12:26:59.242387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.242709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.242717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 00:32:46.378 [2024-06-11 12:26:59.243033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.243344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.243350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 00:32:46.378 [2024-06-11 12:26:59.243633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.243929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.243935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 00:32:46.378 [2024-06-11 12:26:59.244242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.244442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.244448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 
00:32:46.378 [2024-06-11 12:26:59.244709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.244995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.245001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 00:32:46.378 [2024-06-11 12:26:59.245286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.245597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.245603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 00:32:46.378 [2024-06-11 12:26:59.245895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.246245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.246252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 00:32:46.378 [2024-06-11 12:26:59.246409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.246744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.246750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 00:32:46.378 [2024-06-11 12:26:59.247028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.247317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.247323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 00:32:46.378 [2024-06-11 12:26:59.247636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.247962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.247968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 00:32:46.378 [2024-06-11 12:26:59.248259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.248550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.248556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 
00:32:46.378 [2024-06-11 12:26:59.248867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.249199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.249206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 00:32:46.378 [2024-06-11 12:26:59.249516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.249835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.249841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 00:32:46.378 [2024-06-11 12:26:59.250212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.250509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.250515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 00:32:46.378 [2024-06-11 12:26:59.250840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.251149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.251155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 00:32:46.378 [2024-06-11 12:26:59.251514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.251829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.251836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 00:32:46.378 [2024-06-11 12:26:59.252143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.252448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.252454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 00:32:46.378 [2024-06-11 12:26:59.252760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.253079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.253085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 
00:32:46.378 [2024-06-11 12:26:59.253405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.253720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.253727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.378 qpair failed and we were unable to recover it. 00:32:46.378 [2024-06-11 12:26:59.254018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.378 [2024-06-11 12:26:59.254320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.254326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.379 qpair failed and we were unable to recover it. 00:32:46.379 [2024-06-11 12:26:59.254638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.254969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.254976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.379 qpair failed and we were unable to recover it. 00:32:46.379 [2024-06-11 12:26:59.255164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.255480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.255487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.379 qpair failed and we were unable to recover it. 00:32:46.379 [2024-06-11 12:26:59.255642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.255822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.255830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.379 qpair failed and we were unable to recover it. 00:32:46.379 [2024-06-11 12:26:59.256148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.256446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.256453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.379 qpair failed and we were unable to recover it. 00:32:46.379 [2024-06-11 12:26:59.256764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.257081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.257088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.379 qpair failed and we were unable to recover it. 
00:32:46.379 [2024-06-11 12:26:59.257379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.257692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.257699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.379 qpair failed and we were unable to recover it. 00:32:46.379 [2024-06-11 12:26:59.258011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.258209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.258215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.379 qpair failed and we were unable to recover it. 00:32:46.379 [2024-06-11 12:26:59.258514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.258828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.258834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.379 qpair failed and we were unable to recover it. 00:32:46.379 [2024-06-11 12:26:59.259146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.259316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.259323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.379 qpair failed and we were unable to recover it. 00:32:46.379 [2024-06-11 12:26:59.259533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.259857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.259863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.379 qpair failed and we were unable to recover it. 00:32:46.379 [2024-06-11 12:26:59.260093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.260403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.260409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.379 qpair failed and we were unable to recover it. 00:32:46.379 [2024-06-11 12:26:59.260702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.260954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.260961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.379 qpair failed and we were unable to recover it. 
00:32:46.379 [2024-06-11 12:26:59.261264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.261563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.261570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.379 qpair failed and we were unable to recover it. 00:32:46.379 [2024-06-11 12:26:59.261880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.262195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.262201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.379 qpair failed and we were unable to recover it. 00:32:46.379 [2024-06-11 12:26:59.262493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.262807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.262813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.379 qpair failed and we were unable to recover it. 00:32:46.379 [2024-06-11 12:26:59.263210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.263354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.263361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.379 qpair failed and we were unable to recover it. 00:32:46.379 [2024-06-11 12:26:59.263770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.264077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.264084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.379 qpair failed and we were unable to recover it. 00:32:46.379 [2024-06-11 12:26:59.264397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.264738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.264745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.379 qpair failed and we were unable to recover it. 00:32:46.379 [2024-06-11 12:26:59.265075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.265378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.265385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.379 qpair failed and we were unable to recover it. 
00:32:46.379 [2024-06-11 12:26:59.265668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.265982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.265988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.379 qpair failed and we were unable to recover it. 00:32:46.379 [2024-06-11 12:26:59.266157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.266187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.266193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.379 qpair failed and we were unable to recover it. 00:32:46.379 [2024-06-11 12:26:59.266458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.266774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.266781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.379 qpair failed and we were unable to recover it. 00:32:46.379 [2024-06-11 12:26:59.267074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.267462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.267468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.379 qpair failed and we were unable to recover it. 00:32:46.379 [2024-06-11 12:26:59.267761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.379 [2024-06-11 12:26:59.268073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.268080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 00:32:46.380 [2024-06-11 12:26:59.268474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.268779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.268786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 00:32:46.380 [2024-06-11 12:26:59.269091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.269415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.269422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 
00:32:46.380 [2024-06-11 12:26:59.269712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.270026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.270034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 00:32:46.380 [2024-06-11 12:26:59.270343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.270673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.270680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 00:32:46.380 [2024-06-11 12:26:59.270983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.271074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.271081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 00:32:46.380 [2024-06-11 12:26:59.271386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.271702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.271709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 00:32:46.380 [2024-06-11 12:26:59.272019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.272321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.272326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 00:32:46.380 [2024-06-11 12:26:59.272624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.272947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.272953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 00:32:46.380 [2024-06-11 12:26:59.273253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.273582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.273589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 
00:32:46.380 [2024-06-11 12:26:59.273899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.274235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.274242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 00:32:46.380 [2024-06-11 12:26:59.274522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.274810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.274816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 00:32:46.380 [2024-06-11 12:26:59.275113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.275424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.275430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 00:32:46.380 [2024-06-11 12:26:59.275791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.276087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.276093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 00:32:46.380 [2024-06-11 12:26:59.276415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.276702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.276708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 00:32:46.380 [2024-06-11 12:26:59.277000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.277295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.277302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 00:32:46.380 [2024-06-11 12:26:59.277596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.277902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.277908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 
00:32:46.380 [2024-06-11 12:26:59.278220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.278552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.278558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 00:32:46.380 [2024-06-11 12:26:59.278766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.279053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.279059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 00:32:46.380 [2024-06-11 12:26:59.279342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.279641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.279647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 00:32:46.380 [2024-06-11 12:26:59.279941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.280199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.280206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 00:32:46.380 [2024-06-11 12:26:59.280504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.280821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.280827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 00:32:46.380 [2024-06-11 12:26:59.281138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.281461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.281467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 00:32:46.380 [2024-06-11 12:26:59.281760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.282052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.282059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 
00:32:46.380 [2024-06-11 12:26:59.282266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.282591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.282597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 00:32:46.380 [2024-06-11 12:26:59.282908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.283253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.283259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 00:32:46.380 [2024-06-11 12:26:59.283564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.283883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.283889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 00:32:46.380 [2024-06-11 12:26:59.284180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.284500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.284506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 00:32:46.380 [2024-06-11 12:26:59.284789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.285083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.380 [2024-06-11 12:26:59.285089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.380 qpair failed and we were unable to recover it. 00:32:46.381 [2024-06-11 12:26:59.285245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.285562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.285569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 00:32:46.381 [2024-06-11 12:26:59.285906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.286192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.286199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 
00:32:46.381 [2024-06-11 12:26:59.286508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.286799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.286805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 00:32:46.381 [2024-06-11 12:26:59.287082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.287408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.287415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 00:32:46.381 [2024-06-11 12:26:59.287697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.288047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.288054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 00:32:46.381 [2024-06-11 12:26:59.288331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.288612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.288618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 00:32:46.381 [2024-06-11 12:26:59.288909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.289201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.289208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 00:32:46.381 [2024-06-11 12:26:59.289479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.289614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.289621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 00:32:46.381 [2024-06-11 12:26:59.289916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.290087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.290094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 
00:32:46.381 [2024-06-11 12:26:59.290396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.290711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.290718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 00:32:46.381 [2024-06-11 12:26:59.291047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.291350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.291356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 00:32:46.381 [2024-06-11 12:26:59.291650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.291960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.291966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 00:32:46.381 [2024-06-11 12:26:59.292127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.292400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.292406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 00:32:46.381 [2024-06-11 12:26:59.292674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.292974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.292980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 00:32:46.381 [2024-06-11 12:26:59.293275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.293580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.293586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 00:32:46.381 [2024-06-11 12:26:59.293892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.294203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.294209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 
00:32:46.381 [2024-06-11 12:26:59.294525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.294684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.294691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 00:32:46.381 [2024-06-11 12:26:59.295007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.295330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.295336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 00:32:46.381 [2024-06-11 12:26:59.295625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.295937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.295943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 00:32:46.381 [2024-06-11 12:26:59.296243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.296535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.296541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 00:32:46.381 [2024-06-11 12:26:59.296854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.297170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.297176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 00:32:46.381 [2024-06-11 12:26:59.297438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.297753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.297759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 00:32:46.381 [2024-06-11 12:26:59.298041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.298374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.298380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 
00:32:46.381 [2024-06-11 12:26:59.298712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.298999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.299006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 00:32:46.381 [2024-06-11 12:26:59.299311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.299619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.299626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 00:32:46.381 [2024-06-11 12:26:59.299937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.300200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.300207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 00:32:46.381 [2024-06-11 12:26:59.300509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.300834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.300841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 00:32:46.381 [2024-06-11 12:26:59.301177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.301440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.381 [2024-06-11 12:26:59.301446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.381 qpair failed and we were unable to recover it. 00:32:46.382 [2024-06-11 12:26:59.301759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.302047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.302054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.382 qpair failed and we were unable to recover it. 00:32:46.382 [2024-06-11 12:26:59.302354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.302623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.302630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.382 qpair failed and we were unable to recover it. 
00:32:46.382 [2024-06-11 12:26:59.302925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.303038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.303053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.382 qpair failed and we were unable to recover it. 00:32:46.382 [2024-06-11 12:26:59.303356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.303675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.303681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.382 qpair failed and we were unable to recover it. 00:32:46.382 [2024-06-11 12:26:59.303988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.304242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.304249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.382 qpair failed and we were unable to recover it. 00:32:46.382 [2024-06-11 12:26:59.304562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.304830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.304836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.382 qpair failed and we were unable to recover it. 00:32:46.382 [2024-06-11 12:26:59.305031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.305293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.305299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.382 qpair failed and we were unable to recover it. 00:32:46.382 [2024-06-11 12:26:59.305599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.305906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.305912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.382 qpair failed and we were unable to recover it. 00:32:46.382 [2024-06-11 12:26:59.306115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.306313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.306320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.382 qpair failed and we were unable to recover it. 
00:32:46.382 [2024-06-11 12:26:59.306527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.306753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.306761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.382 qpair failed and we were unable to recover it. 00:32:46.382 [2024-06-11 12:26:59.306927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.307217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.307224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.382 qpair failed and we were unable to recover it. 00:32:46.382 [2024-06-11 12:26:59.307524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.307837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.307844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.382 qpair failed and we were unable to recover it. 00:32:46.382 [2024-06-11 12:26:59.308029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.308361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.308368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.382 qpair failed and we were unable to recover it. 00:32:46.382 [2024-06-11 12:26:59.308524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.308814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.308820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.382 qpair failed and we were unable to recover it. 00:32:46.382 [2024-06-11 12:26:59.309142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.309460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.309466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.382 qpair failed and we were unable to recover it. 00:32:46.382 [2024-06-11 12:26:59.309776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.310059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.310066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.382 qpair failed and we were unable to recover it. 
00:32:46.382 [2024-06-11 12:26:59.310383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.310690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.310696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.382 qpair failed and we were unable to recover it. 00:32:46.382 [2024-06-11 12:26:59.310999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.311321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.311328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.382 qpair failed and we were unable to recover it. 00:32:46.382 [2024-06-11 12:26:59.311520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.311699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.311706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.382 qpair failed and we were unable to recover it. 00:32:46.382 [2024-06-11 12:26:59.312007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.312296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.312304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.382 qpair failed and we were unable to recover it. 00:32:46.382 [2024-06-11 12:26:59.312617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.312809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.312816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.382 qpair failed and we were unable to recover it. 00:32:46.382 [2024-06-11 12:26:59.313138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.313439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.313446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.382 qpair failed and we were unable to recover it. 00:32:46.382 [2024-06-11 12:26:59.313757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.314064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.382 [2024-06-11 12:26:59.314071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.382 qpair failed and we were unable to recover it. 
00:32:46.658 [2024-06-11 12:26:59.398974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.399171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.399178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.658 qpair failed and we were unable to recover it. 00:32:46.658 [2024-06-11 12:26:59.399442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.399662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.399669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.658 qpair failed and we were unable to recover it. 00:32:46.658 [2024-06-11 12:26:59.399985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.400279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.400286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.658 qpair failed and we were unable to recover it. 00:32:46.658 [2024-06-11 12:26:59.400590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.400895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.400902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.658 qpair failed and we were unable to recover it. 00:32:46.658 [2024-06-11 12:26:59.401186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.401490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.401496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.658 qpair failed and we were unable to recover it. 00:32:46.658 [2024-06-11 12:26:59.401784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.402106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.402113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.658 qpair failed and we were unable to recover it. 00:32:46.658 [2024-06-11 12:26:59.402420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.402703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.402710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.658 qpair failed and we were unable to recover it. 
00:32:46.658 [2024-06-11 12:26:59.403068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.403406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.403412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.658 qpair failed and we were unable to recover it. 00:32:46.658 [2024-06-11 12:26:59.403757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.403916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.403922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.658 qpair failed and we were unable to recover it. 00:32:46.658 [2024-06-11 12:26:59.404197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.404478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.404484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.658 qpair failed and we were unable to recover it. 00:32:46.658 [2024-06-11 12:26:59.404797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.404960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.404966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.658 qpair failed and we were unable to recover it. 00:32:46.658 [2024-06-11 12:26:59.405235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.405434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.405441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.658 qpair failed and we were unable to recover it. 00:32:46.658 [2024-06-11 12:26:59.405716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.406038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.406045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.658 qpair failed and we were unable to recover it. 00:32:46.658 [2024-06-11 12:26:59.406372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.406580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.406586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.658 qpair failed and we were unable to recover it. 
00:32:46.658 [2024-06-11 12:26:59.406635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.406842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.406848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.658 qpair failed and we were unable to recover it. 00:32:46.658 [2024-06-11 12:26:59.407160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.407490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.407496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.658 qpair failed and we were unable to recover it. 00:32:46.658 [2024-06-11 12:26:59.407833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.408135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.408142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.658 qpair failed and we were unable to recover it. 00:32:46.658 [2024-06-11 12:26:59.408436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.408762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.408768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.658 qpair failed and we were unable to recover it. 00:32:46.658 [2024-06-11 12:26:59.409092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.409419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.409425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.658 qpair failed and we were unable to recover it. 00:32:46.658 [2024-06-11 12:26:59.409734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.410055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.410062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.658 qpair failed and we were unable to recover it. 00:32:46.658 [2024-06-11 12:26:59.410370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.410695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.410701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.658 qpair failed and we were unable to recover it. 
00:32:46.658 [2024-06-11 12:26:59.410790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.411068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.411075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.658 qpair failed and we were unable to recover it. 00:32:46.658 [2024-06-11 12:26:59.411403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.411708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.411716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.658 qpair failed and we were unable to recover it. 00:32:46.658 [2024-06-11 12:26:59.412024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.412341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.658 [2024-06-11 12:26:59.412347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 00:32:46.659 [2024-06-11 12:26:59.412637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.412982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.412988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 00:32:46.659 [2024-06-11 12:26:59.413283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.413599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.413612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 00:32:46.659 [2024-06-11 12:26:59.413914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.414238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.414245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 00:32:46.659 [2024-06-11 12:26:59.414564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.414863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.414869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 
00:32:46.659 [2024-06-11 12:26:59.415177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.415517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.415523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 00:32:46.659 [2024-06-11 12:26:59.415846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.416144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.416150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 00:32:46.659 [2024-06-11 12:26:59.416438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.416729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.416735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 00:32:46.659 [2024-06-11 12:26:59.417047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.417371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.417377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 00:32:46.659 [2024-06-11 12:26:59.417665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.417975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.417983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 00:32:46.659 [2024-06-11 12:26:59.418282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.418577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.418583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 00:32:46.659 [2024-06-11 12:26:59.418893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.419063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.419070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 
00:32:46.659 [2024-06-11 12:26:59.419287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.419640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.419647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 00:32:46.659 [2024-06-11 12:26:59.419950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.420272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.420279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 00:32:46.659 [2024-06-11 12:26:59.420570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.420905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.420912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 00:32:46.659 [2024-06-11 12:26:59.421201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.421537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.421544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 00:32:46.659 [2024-06-11 12:26:59.421849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.422164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.422170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 00:32:46.659 [2024-06-11 12:26:59.422472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.422730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.422736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 00:32:46.659 [2024-06-11 12:26:59.423028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.423391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.423397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 
00:32:46.659 [2024-06-11 12:26:59.423677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.423729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.423737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 00:32:46.659 [2024-06-11 12:26:59.424037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.424343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.424350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 00:32:46.659 [2024-06-11 12:26:59.424641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.424853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.424860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 00:32:46.659 [2024-06-11 12:26:59.425077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.425396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.425402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 00:32:46.659 [2024-06-11 12:26:59.425715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.426005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.426011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 00:32:46.659 [2024-06-11 12:26:59.426286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.426613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.426619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 00:32:46.659 [2024-06-11 12:26:59.426813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.426986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.426993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 
00:32:46.659 [2024-06-11 12:26:59.427295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.427499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.427505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 00:32:46.659 [2024-06-11 12:26:59.427809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.428009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.428015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 00:32:46.659 [2024-06-11 12:26:59.428337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.428666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.428672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.659 qpair failed and we were unable to recover it. 00:32:46.659 [2024-06-11 12:26:59.428845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.659 [2024-06-11 12:26:59.429184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.429193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 00:32:46.660 [2024-06-11 12:26:59.429495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.429810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.429816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 00:32:46.660 [2024-06-11 12:26:59.430029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.430370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.430376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 00:32:46.660 [2024-06-11 12:26:59.430685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.431000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.431007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 
00:32:46.660 [2024-06-11 12:26:59.431313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.431629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.431636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 00:32:46.660 [2024-06-11 12:26:59.431934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.432233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.432239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 00:32:46.660 [2024-06-11 12:26:59.432553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.432837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.432843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 00:32:46.660 [2024-06-11 12:26:59.433156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.433483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.433489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 00:32:46.660 [2024-06-11 12:26:59.433779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.433960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.433967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 00:32:46.660 [2024-06-11 12:26:59.434265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.434591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.434597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 00:32:46.660 [2024-06-11 12:26:59.434879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.435163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.435169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 
00:32:46.660 [2024-06-11 12:26:59.435361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.435703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.435710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 00:32:46.660 [2024-06-11 12:26:59.435902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.436210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.436216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 00:32:46.660 [2024-06-11 12:26:59.436534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.436847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.436853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 00:32:46.660 [2024-06-11 12:26:59.437184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.437499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.437506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 00:32:46.660 [2024-06-11 12:26:59.437791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.438078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.438084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 00:32:46.660 [2024-06-11 12:26:59.438399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.438705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.438711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 00:32:46.660 [2024-06-11 12:26:59.439000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.439295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.439301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 
00:32:46.660 [2024-06-11 12:26:59.439589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.439910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.439917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 00:32:46.660 [2024-06-11 12:26:59.440207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.440512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.440520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 00:32:46.660 [2024-06-11 12:26:59.440810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.441092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.441099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 00:32:46.660 [2024-06-11 12:26:59.441422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.441694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.441700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 00:32:46.660 [2024-06-11 12:26:59.441982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.442256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.442263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 00:32:46.660 [2024-06-11 12:26:59.442579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.442865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.442871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 00:32:46.660 [2024-06-11 12:26:59.443162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.443488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.443494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 
00:32:46.660 [2024-06-11 12:26:59.443648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.443922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.443929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 00:32:46.660 [2024-06-11 12:26:59.444248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.444550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.444556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 00:32:46.660 [2024-06-11 12:26:59.444741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.444971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.444977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 00:32:46.660 [2024-06-11 12:26:59.445261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.445565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.445571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.660 qpair failed and we were unable to recover it. 00:32:46.660 [2024-06-11 12:26:59.445851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.660 [2024-06-11 12:26:59.446184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.446190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 00:32:46.661 [2024-06-11 12:26:59.446382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.446649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.446655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 00:32:46.661 [2024-06-11 12:26:59.446969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.447292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.447298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 
00:32:46.661 [2024-06-11 12:26:59.447588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.447898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.447904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 00:32:46.661 [2024-06-11 12:26:59.448190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.448511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.448517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 00:32:46.661 [2024-06-11 12:26:59.448828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.449165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.449171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 00:32:46.661 [2024-06-11 12:26:59.449469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.449672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.449678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 00:32:46.661 [2024-06-11 12:26:59.449972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.450276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.450282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 00:32:46.661 [2024-06-11 12:26:59.450610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.450930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.450937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 00:32:46.661 [2024-06-11 12:26:59.451252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.451573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.451579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 
00:32:46.661 [2024-06-11 12:26:59.451743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.451926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.451933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 00:32:46.661 [2024-06-11 12:26:59.452247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.452611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.452617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 00:32:46.661 [2024-06-11 12:26:59.452789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.452998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.453004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 00:32:46.661 [2024-06-11 12:26:59.453299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.453607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.453613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 00:32:46.661 [2024-06-11 12:26:59.453919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.454201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.454208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 00:32:46.661 [2024-06-11 12:26:59.454499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.454793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.454799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 00:32:46.661 [2024-06-11 12:26:59.454974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.455249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.455255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 
00:32:46.661 [2024-06-11 12:26:59.455579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.455893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.455900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 00:32:46.661 [2024-06-11 12:26:59.456218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.456387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.456394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 00:32:46.661 [2024-06-11 12:26:59.456808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.457093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.457100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 00:32:46.661 [2024-06-11 12:26:59.457425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.457732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.457739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 00:32:46.661 [2024-06-11 12:26:59.458053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.458373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.458379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 00:32:46.661 [2024-06-11 12:26:59.458687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.458842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.458849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 00:32:46.661 [2024-06-11 12:26:59.459118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.459424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.459430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 
00:32:46.661 [2024-06-11 12:26:59.459646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.459971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.459977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 00:32:46.661 [2024-06-11 12:26:59.460093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.460348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.460355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 00:32:46.661 [2024-06-11 12:26:59.460663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.460979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.460985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 00:32:46.661 [2024-06-11 12:26:59.461297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.461612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.461618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 00:32:46.661 [2024-06-11 12:26:59.461891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.462180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.661 [2024-06-11 12:26:59.462186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.661 qpair failed and we were unable to recover it. 00:32:46.661 [2024-06-11 12:26:59.462490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.462808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.462817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.662 qpair failed and we were unable to recover it. 00:32:46.662 [2024-06-11 12:26:59.463123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.463443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.463451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.662 qpair failed and we were unable to recover it. 
00:32:46.662 [2024-06-11 12:26:59.463775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.464094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.464102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.662 qpair failed and we were unable to recover it. 00:32:46.662 [2024-06-11 12:26:59.464394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.464711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.464719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.662 qpair failed and we were unable to recover it. 00:32:46.662 [2024-06-11 12:26:59.465032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.465218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.465225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.662 qpair failed and we were unable to recover it. 00:32:46.662 [2024-06-11 12:26:59.465558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.465877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.465885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.662 qpair failed and we were unable to recover it. 00:32:46.662 [2024-06-11 12:26:59.466185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.466507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.466515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.662 qpair failed and we were unable to recover it. 00:32:46.662 [2024-06-11 12:26:59.466803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.467112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.467120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.662 qpair failed and we were unable to recover it. 00:32:46.662 [2024-06-11 12:26:59.467427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.467745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.467752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.662 qpair failed and we were unable to recover it. 
00:32:46.662 [2024-06-11 12:26:59.468066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.468347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.468354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.662 qpair failed and we were unable to recover it. 00:32:46.662 [2024-06-11 12:26:59.468657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.469000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.469009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.662 qpair failed and we were unable to recover it. 00:32:46.662 [2024-06-11 12:26:59.469343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.469635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.469642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.662 qpair failed and we were unable to recover it. 00:32:46.662 [2024-06-11 12:26:59.469952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.470275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.470283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.662 qpair failed and we were unable to recover it. 00:32:46.662 [2024-06-11 12:26:59.470595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.470946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.470955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.662 qpair failed and we were unable to recover it. 00:32:46.662 [2024-06-11 12:26:59.471273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.471588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.471597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.662 qpair failed and we were unable to recover it. 00:32:46.662 [2024-06-11 12:26:59.471895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.472206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.472215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.662 qpair failed and we were unable to recover it. 
00:32:46.662 [2024-06-11 12:26:59.472518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.472832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.472840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.662 qpair failed and we were unable to recover it. 00:32:46.662 [2024-06-11 12:26:59.473145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.473467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.473475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.662 qpair failed and we were unable to recover it. 00:32:46.662 [2024-06-11 12:26:59.473807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.474120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.474127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.662 qpair failed and we were unable to recover it. 00:32:46.662 [2024-06-11 12:26:59.474427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.474742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.474749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.662 qpair failed and we were unable to recover it. 00:32:46.662 [2024-06-11 12:26:59.475054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.475356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.475363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.662 qpair failed and we were unable to recover it. 00:32:46.662 [2024-06-11 12:26:59.475674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.475988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.475996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.662 qpair failed and we were unable to recover it. 00:32:46.662 [2024-06-11 12:26:59.476190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.476534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.476541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.662 qpair failed and we were unable to recover it. 
00:32:46.662 [2024-06-11 12:26:59.476730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.476912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.662 [2024-06-11 12:26:59.476919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 00:32:46.663 [2024-06-11 12:26:59.477200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.477385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.477393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 00:32:46.663 [2024-06-11 12:26:59.477714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.478026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.478034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 00:32:46.663 [2024-06-11 12:26:59.478323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.478487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.478494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 00:32:46.663 [2024-06-11 12:26:59.478788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.479110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.479118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 00:32:46.663 [2024-06-11 12:26:59.479434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.479747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.479755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 00:32:46.663 [2024-06-11 12:26:59.480058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.480436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.480444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 
00:32:46.663 [2024-06-11 12:26:59.480735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.481048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.481055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 00:32:46.663 [2024-06-11 12:26:59.481379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.481692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.481700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 00:32:46.663 [2024-06-11 12:26:59.482011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.482295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.482303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 00:32:46.663 [2024-06-11 12:26:59.482590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.482901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.482909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 00:32:46.663 [2024-06-11 12:26:59.483186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.483507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.483514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 00:32:46.663 [2024-06-11 12:26:59.483805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.484081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.484090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 00:32:46.663 [2024-06-11 12:26:59.484438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.484775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.484783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 
00:32:46.663 [2024-06-11 12:26:59.485052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.485391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.485398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 00:32:46.663 [2024-06-11 12:26:59.485685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.486001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.486009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 00:32:46.663 [2024-06-11 12:26:59.486299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.486611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.486619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 00:32:46.663 [2024-06-11 12:26:59.486925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.487241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.487249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 00:32:46.663 [2024-06-11 12:26:59.487554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.487876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.487884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 00:32:46.663 [2024-06-11 12:26:59.488197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.488510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.488518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 00:32:46.663 [2024-06-11 12:26:59.488810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.489128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.489137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 
00:32:46.663 [2024-06-11 12:26:59.489438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.489750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.489758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 00:32:46.663 [2024-06-11 12:26:59.490028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.490311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.490319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 00:32:46.663 [2024-06-11 12:26:59.490512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.490650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.490658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 00:32:46.663 [2024-06-11 12:26:59.490826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.490987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.490996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 00:32:46.663 [2024-06-11 12:26:59.491306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.491641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.491648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 00:32:46.663 [2024-06-11 12:26:59.491954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.492260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.492268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 00:32:46.663 [2024-06-11 12:26:59.492558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.492697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.492705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 
00:32:46.663 [2024-06-11 12:26:59.493040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.493317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.493325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.663 qpair failed and we were unable to recover it. 00:32:46.663 [2024-06-11 12:26:59.493630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.663 [2024-06-11 12:26:59.493952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.493960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 00:32:46.664 [2024-06-11 12:26:59.494269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.494586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.494594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 00:32:46.664 [2024-06-11 12:26:59.494883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.495199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.495207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 00:32:46.664 [2024-06-11 12:26:59.495497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.495667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.495675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 00:32:46.664 [2024-06-11 12:26:59.495939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.496209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.496217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 00:32:46.664 [2024-06-11 12:26:59.496540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.496859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.496867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 
00:32:46.664 [2024-06-11 12:26:59.497194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.497492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.497499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 00:32:46.664 [2024-06-11 12:26:59.497827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.498147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.498155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 00:32:46.664 [2024-06-11 12:26:59.498465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.498790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.498797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 00:32:46.664 [2024-06-11 12:26:59.499104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.499421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.499429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 00:32:46.664 [2024-06-11 12:26:59.499756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.499911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.499919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 00:32:46.664 [2024-06-11 12:26:59.500172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.500473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.500481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 00:32:46.664 [2024-06-11 12:26:59.500786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.501088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.501096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 
00:32:46.664 [2024-06-11 12:26:59.501439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.501734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.501742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 00:32:46.664 [2024-06-11 12:26:59.502032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.502348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.502355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 00:32:46.664 [2024-06-11 12:26:59.502647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.502955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.502963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 00:32:46.664 [2024-06-11 12:26:59.503316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.503512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.503519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 00:32:46.664 [2024-06-11 12:26:59.503841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.504165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.504173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 00:32:46.664 [2024-06-11 12:26:59.504478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.504786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.504794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 00:32:46.664 [2024-06-11 12:26:59.505084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.505405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.505412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 
00:32:46.664 [2024-06-11 12:26:59.505721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.506014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.506025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 00:32:46.664 [2024-06-11 12:26:59.506331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.506630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.506639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 00:32:46.664 [2024-06-11 12:26:59.506927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.507243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.507251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 00:32:46.664 [2024-06-11 12:26:59.507541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.507848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.507856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 00:32:46.664 [2024-06-11 12:26:59.508161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.508473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.508480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 00:32:46.664 [2024-06-11 12:26:59.508828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.509145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.509153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 00:32:46.664 [2024-06-11 12:26:59.509457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.509777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.509785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 
00:32:46.664 [2024-06-11 12:26:59.510094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.510422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.510429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 00:32:46.664 [2024-06-11 12:26:59.510581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.510773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.510782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.664 qpair failed and we were unable to recover it. 00:32:46.664 [2024-06-11 12:26:59.511056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.664 [2024-06-11 12:26:59.511344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.511352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 00:32:46.665 [2024-06-11 12:26:59.511639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.511958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.511966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 00:32:46.665 [2024-06-11 12:26:59.512273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.512559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.512567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 00:32:46.665 [2024-06-11 12:26:59.512871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.513176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.513184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 00:32:46.665 [2024-06-11 12:26:59.513491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.513826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.513833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 
00:32:46.665 [2024-06-11 12:26:59.514168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.514476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.514484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 00:32:46.665 [2024-06-11 12:26:59.514648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.514937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.514945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 00:32:46.665 [2024-06-11 12:26:59.515226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.515556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.515564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 00:32:46.665 [2024-06-11 12:26:59.515869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.516184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.516191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 00:32:46.665 [2024-06-11 12:26:59.516481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.516791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.516799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 00:32:46.665 [2024-06-11 12:26:59.517132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.517452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.517460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 00:32:46.665 [2024-06-11 12:26:59.517752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.518088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.518095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 
00:32:46.665 [2024-06-11 12:26:59.518401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.518717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.518726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 00:32:46.665 [2024-06-11 12:26:59.519014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.519288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.519295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 00:32:46.665 [2024-06-11 12:26:59.519578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.519892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.519899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 00:32:46.665 [2024-06-11 12:26:59.520120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.520278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.520286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 00:32:46.665 [2024-06-11 12:26:59.520544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.520891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.520899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 00:32:46.665 [2024-06-11 12:26:59.521207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.521496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.521504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 00:32:46.665 [2024-06-11 12:26:59.521834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.522126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.522133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 
00:32:46.665 [2024-06-11 12:26:59.522441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.522757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.522764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 00:32:46.665 [2024-06-11 12:26:59.523068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.523369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.523376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 00:32:46.665 [2024-06-11 12:26:59.523665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.523977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.523984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 00:32:46.665 [2024-06-11 12:26:59.524256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.524566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.524573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 00:32:46.665 [2024-06-11 12:26:59.524880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.525200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.525209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 00:32:46.665 [2024-06-11 12:26:59.525514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.525711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.525719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 00:32:46.665 [2024-06-11 12:26:59.525848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.526138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.526146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 
00:32:46.665 [2024-06-11 12:26:59.526437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.526597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.526604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 00:32:46.665 [2024-06-11 12:26:59.526907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.527225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.527233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 00:32:46.665 [2024-06-11 12:26:59.527529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.527703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.665 [2024-06-11 12:26:59.527711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.665 qpair failed and we were unable to recover it. 00:32:46.665 [2024-06-11 12:26:59.528047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.528370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.528378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 00:32:46.666 [2024-06-11 12:26:59.528703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.529013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.529026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 00:32:46.666 [2024-06-11 12:26:59.529236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.529552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.529559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 00:32:46.666 [2024-06-11 12:26:59.529771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.530103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.530110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 
00:32:46.666 [2024-06-11 12:26:59.530400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.530597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.530604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 00:32:46.666 [2024-06-11 12:26:59.530906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.531189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.531196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 00:32:46.666 [2024-06-11 12:26:59.531520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.531853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.531860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 00:32:46.666 [2024-06-11 12:26:59.532143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.532464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.532471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 00:32:46.666 [2024-06-11 12:26:59.532773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.532936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.532945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 00:32:46.666 [2024-06-11 12:26:59.533217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.533508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.533516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 00:32:46.666 [2024-06-11 12:26:59.533822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.534115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.534123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 
00:32:46.666 [2024-06-11 12:26:59.534431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.534733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.534741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 00:32:46.666 [2024-06-11 12:26:59.535028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.535187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.535194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 00:32:46.666 [2024-06-11 12:26:59.535502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.535833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.535840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 00:32:46.666 [2024-06-11 12:26:59.536147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.536340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.536348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 00:32:46.666 [2024-06-11 12:26:59.536670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.536989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.536996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 00:32:46.666 [2024-06-11 12:26:59.537276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.537464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.537472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 00:32:46.666 [2024-06-11 12:26:59.537778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.537996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.538003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 
00:32:46.666 [2024-06-11 12:26:59.538299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.538603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.538610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 00:32:46.666 [2024-06-11 12:26:59.538798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.539127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.539135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 00:32:46.666 [2024-06-11 12:26:59.539422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.539727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.539735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 00:32:46.666 [2024-06-11 12:26:59.540036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.540348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.540355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 00:32:46.666 [2024-06-11 12:26:59.540705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.540993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.541001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 00:32:46.666 [2024-06-11 12:26:59.541390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.541679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.541687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 00:32:46.666 [2024-06-11 12:26:59.542014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.542305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.542313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 
00:32:46.666 [2024-06-11 12:26:59.542601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.542886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.542894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 00:32:46.666 [2024-06-11 12:26:59.543213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.543546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.543553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.666 qpair failed and we were unable to recover it. 00:32:46.666 [2024-06-11 12:26:59.543858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.544190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.666 [2024-06-11 12:26:59.544198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 00:32:46.667 [2024-06-11 12:26:59.544525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.544861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.544868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 00:32:46.667 [2024-06-11 12:26:59.545122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.545406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.545414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 00:32:46.667 [2024-06-11 12:26:59.545720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.546049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.546058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 00:32:46.667 [2024-06-11 12:26:59.546363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.546578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.546586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 
00:32:46.667 [2024-06-11 12:26:59.546880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.547195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.547202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 00:32:46.667 [2024-06-11 12:26:59.547528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.547851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.547858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 00:32:46.667 [2024-06-11 12:26:59.548165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.548491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.548498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 00:32:46.667 [2024-06-11 12:26:59.548814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.549119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.549127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 00:32:46.667 [2024-06-11 12:26:59.549300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.549571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.549579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 00:32:46.667 [2024-06-11 12:26:59.549868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.550181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.550189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 00:32:46.667 [2024-06-11 12:26:59.550510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.550797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.550804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 
00:32:46.667 [2024-06-11 12:26:59.551108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.551421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.551428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 00:32:46.667 [2024-06-11 12:26:59.551771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.552086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.552094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 00:32:46.667 [2024-06-11 12:26:59.552379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.552694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.552701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 00:32:46.667 [2024-06-11 12:26:59.553000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.553140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.553148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 00:32:46.667 [2024-06-11 12:26:59.553462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.553681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.553689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 00:32:46.667 [2024-06-11 12:26:59.554026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.554333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.554341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 00:32:46.667 [2024-06-11 12:26:59.554524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.554661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.554669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 
00:32:46.667 [2024-06-11 12:26:59.554929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.555250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.555258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 00:32:46.667 [2024-06-11 12:26:59.555561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.555875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.555882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 00:32:46.667 [2024-06-11 12:26:59.556056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.556344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.556352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 00:32:46.667 [2024-06-11 12:26:59.556646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.556961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.556969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 00:32:46.667 [2024-06-11 12:26:59.557163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.557380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.557389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 00:32:46.667 [2024-06-11 12:26:59.557742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.558030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.558038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 00:32:46.667 [2024-06-11 12:26:59.558307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.558643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.558651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 
00:32:46.667 [2024-06-11 12:26:59.558940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.559256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.667 [2024-06-11 12:26:59.559264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.667 qpair failed and we were unable to recover it. 00:32:46.667 [2024-06-11 12:26:59.559569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.559892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.559899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 00:32:46.668 [2024-06-11 12:26:59.560182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.560511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.560518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 00:32:46.668 [2024-06-11 12:26:59.560842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.561155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.561162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 00:32:46.668 [2024-06-11 12:26:59.561460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.561776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.561783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 00:32:46.668 [2024-06-11 12:26:59.562080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.562415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.562422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 00:32:46.668 [2024-06-11 12:26:59.562762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.563098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.563106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 
00:32:46.668 [2024-06-11 12:26:59.563404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.563717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.563726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 00:32:46.668 [2024-06-11 12:26:59.564020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.564300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.564308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 00:32:46.668 [2024-06-11 12:26:59.564611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.564894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.564902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 00:32:46.668 [2024-06-11 12:26:59.565253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.565578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.565586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 00:32:46.668 [2024-06-11 12:26:59.565916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.566233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.566241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 00:32:46.668 [2024-06-11 12:26:59.566535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.566854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.566861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 00:32:46.668 [2024-06-11 12:26:59.567172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.567466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.567473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 
00:32:46.668 [2024-06-11 12:26:59.567776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.568090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.568098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 00:32:46.668 [2024-06-11 12:26:59.568399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.568710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.568718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 00:32:46.668 [2024-06-11 12:26:59.569061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.569357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.569365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 00:32:46.668 [2024-06-11 12:26:59.569674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.569989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.569996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 00:32:46.668 [2024-06-11 12:26:59.570306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.570617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.570625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 00:32:46.668 [2024-06-11 12:26:59.570987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.571154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.571162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 00:32:46.668 [2024-06-11 12:26:59.571461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.571588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.571597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 
00:32:46.668 [2024-06-11 12:26:59.571917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.572184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.572192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 00:32:46.668 [2024-06-11 12:26:59.572376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.572696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.572703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 00:32:46.668 [2024-06-11 12:26:59.572995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.573290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.573298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 00:32:46.668 [2024-06-11 12:26:59.573608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.573771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.573780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 00:32:46.668 [2024-06-11 12:26:59.574057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.574368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.574376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 00:32:46.668 [2024-06-11 12:26:59.574672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.574941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.574948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 00:32:46.668 [2024-06-11 12:26:59.575160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.575418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.575426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 
00:32:46.668 [2024-06-11 12:26:59.575604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.575938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.575946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 00:32:46.668 [2024-06-11 12:26:59.576249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.576569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.668 [2024-06-11 12:26:59.576577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.668 qpair failed and we were unable to recover it. 00:32:46.669 [2024-06-11 12:26:59.576887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.577157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.577165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 00:32:46.669 [2024-06-11 12:26:59.577454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.577766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.577774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 00:32:46.669 [2024-06-11 12:26:59.578063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.578389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.578397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 00:32:46.669 [2024-06-11 12:26:59.578702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.579021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.579029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 00:32:46.669 [2024-06-11 12:26:59.579313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.579631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.579638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 
00:32:46.669 [2024-06-11 12:26:59.579925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.580153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.580160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 00:32:46.669 [2024-06-11 12:26:59.580487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.580673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.580682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 00:32:46.669 [2024-06-11 12:26:59.580978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.581285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.581294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 00:32:46.669 [2024-06-11 12:26:59.581598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.581933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.581941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 00:32:46.669 [2024-06-11 12:26:59.582256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.582573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.582582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 00:32:46.669 [2024-06-11 12:26:59.582856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.583166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.583174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 00:32:46.669 [2024-06-11 12:26:59.583481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.583769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.583777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 
00:32:46.669 [2024-06-11 12:26:59.584085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.584306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.584313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 00:32:46.669 [2024-06-11 12:26:59.584605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.584918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.584926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 00:32:46.669 [2024-06-11 12:26:59.585229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.585527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.585534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 00:32:46.669 [2024-06-11 12:26:59.585836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.586123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.586131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 00:32:46.669 [2024-06-11 12:26:59.586435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.586746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.586754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 00:32:46.669 [2024-06-11 12:26:59.587045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.587361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.587368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 00:32:46.669 [2024-06-11 12:26:59.587695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.588029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.588037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 
00:32:46.669 [2024-06-11 12:26:59.588343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.588500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.588508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 00:32:46.669 [2024-06-11 12:26:59.588774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.589053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.589060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 00:32:46.669 [2024-06-11 12:26:59.589422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.589772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.589779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 00:32:46.669 [2024-06-11 12:26:59.590068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.590368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.590375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 00:32:46.669 [2024-06-11 12:26:59.590683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.590986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.590994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 00:32:46.669 [2024-06-11 12:26:59.591303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.591617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.591624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 00:32:46.669 [2024-06-11 12:26:59.591917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.592231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.592239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 
00:32:46.669 [2024-06-11 12:26:59.592565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.592873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.592881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 00:32:46.669 [2024-06-11 12:26:59.593192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.593563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.593570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 00:32:46.669 [2024-06-11 12:26:59.593913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.594176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.669 [2024-06-11 12:26:59.594184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.669 qpair failed and we were unable to recover it. 00:32:46.670 [2024-06-11 12:26:59.594488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.594803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.594811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 00:32:46.670 [2024-06-11 12:26:59.595137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.595449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.595457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 00:32:46.670 [2024-06-11 12:26:59.595608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.595920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.595929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 00:32:46.670 [2024-06-11 12:26:59.596247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.596561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.596568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 
00:32:46.670 [2024-06-11 12:26:59.596857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.597174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.597183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 00:32:46.670 [2024-06-11 12:26:59.597533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.597867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.597874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 00:32:46.670 [2024-06-11 12:26:59.598181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.598490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.598497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 00:32:46.670 [2024-06-11 12:26:59.598802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.598996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.599003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 00:32:46.670 [2024-06-11 12:26:59.599162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.599478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.599485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 00:32:46.670 [2024-06-11 12:26:59.599809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.600056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.600064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 00:32:46.670 [2024-06-11 12:26:59.600397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.600734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.600742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 
00:32:46.670 [2024-06-11 12:26:59.601069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.601368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.601375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 00:32:46.670 [2024-06-11 12:26:59.601710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.601993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.602002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 00:32:46.670 [2024-06-11 12:26:59.602312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.602616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.602623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 00:32:46.670 [2024-06-11 12:26:59.602931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.603225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.603233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 00:32:46.670 [2024-06-11 12:26:59.603450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.603714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.603722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 00:32:46.670 [2024-06-11 12:26:59.603877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.604127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.604135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 00:32:46.670 [2024-06-11 12:26:59.604311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.604606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.604614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 
00:32:46.670 [2024-06-11 12:26:59.604915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.605229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.605237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 00:32:46.670 [2024-06-11 12:26:59.605542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.605888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.605896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 00:32:46.670 [2024-06-11 12:26:59.606187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.606358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.606365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 00:32:46.670 [2024-06-11 12:26:59.606707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.606997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.607005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 00:32:46.670 [2024-06-11 12:26:59.607297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.607608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.607618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 00:32:46.670 [2024-06-11 12:26:59.607923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.608213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.608221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 00:32:46.670 [2024-06-11 12:26:59.608506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.608818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.608826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 
00:32:46.670 [2024-06-11 12:26:59.609138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.609442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.609449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 00:32:46.670 [2024-06-11 12:26:59.609755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.609948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.609955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 00:32:46.670 [2024-06-11 12:26:59.610261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.610588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.610596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 00:32:46.670 [2024-06-11 12:26:59.610883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.610936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.670 [2024-06-11 12:26:59.610944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.670 qpair failed and we were unable to recover it. 00:32:46.671 [2024-06-11 12:26:59.611240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.611560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.611568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.671 qpair failed and we were unable to recover it. 00:32:46.671 [2024-06-11 12:26:59.611872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.612036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.612043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.671 qpair failed and we were unable to recover it. 00:32:46.671 [2024-06-11 12:26:59.612328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.612632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.612640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.671 qpair failed and we were unable to recover it. 
00:32:46.671 [2024-06-11 12:26:59.612965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.613280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.613289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.671 qpair failed and we were unable to recover it. 00:32:46.671 [2024-06-11 12:26:59.613622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.613786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.613794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.671 qpair failed and we were unable to recover it. 00:32:46.671 [2024-06-11 12:26:59.614099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.614252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.614259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.671 qpair failed and we were unable to recover it. 00:32:46.671 [2024-06-11 12:26:59.614528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.614844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.614851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.671 qpair failed and we were unable to recover it. 00:32:46.671 [2024-06-11 12:26:59.615181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.615485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.615493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.671 qpair failed and we were unable to recover it. 00:32:46.671 [2024-06-11 12:26:59.615680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.616001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.616008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.671 qpair failed and we were unable to recover it. 00:32:46.671 [2024-06-11 12:26:59.616303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.616620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.616628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.671 qpair failed and we were unable to recover it. 
00:32:46.671 [2024-06-11 12:26:59.616973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.617283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.617291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.671 qpair failed and we were unable to recover it. 00:32:46.671 [2024-06-11 12:26:59.617589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.617902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.617909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.671 qpair failed and we were unable to recover it. 00:32:46.671 [2024-06-11 12:26:59.618186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.618484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.618491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.671 qpair failed and we were unable to recover it. 00:32:46.671 [2024-06-11 12:26:59.618664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.618955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.618962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.671 qpair failed and we were unable to recover it. 00:32:46.671 [2024-06-11 12:26:59.619271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.619605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.619612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.671 qpair failed and we were unable to recover it. 00:32:46.671 [2024-06-11 12:26:59.619939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.620240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.620248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.671 qpair failed and we were unable to recover it. 00:32:46.671 [2024-06-11 12:26:59.620535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.620847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.620854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.671 qpair failed and we were unable to recover it. 
00:32:46.671 [2024-06-11 12:26:59.621162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.621490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.621498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.671 qpair failed and we were unable to recover it. 00:32:46.671 [2024-06-11 12:26:59.621801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.622086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.622094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.671 qpair failed and we were unable to recover it. 00:32:46.671 [2024-06-11 12:26:59.622397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.622737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.622745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.671 qpair failed and we were unable to recover it. 00:32:46.671 [2024-06-11 12:26:59.623078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.623419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.623427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.671 qpair failed and we were unable to recover it. 00:32:46.671 [2024-06-11 12:26:59.623739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.624058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.624066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.671 qpair failed and we were unable to recover it. 00:32:46.671 [2024-06-11 12:26:59.624381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.624704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.624712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.671 qpair failed and we were unable to recover it. 00:32:46.671 [2024-06-11 12:26:59.624901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.625087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.625094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.671 qpair failed and we were unable to recover it. 
00:32:46.671 [2024-06-11 12:26:59.625424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.625746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.671 [2024-06-11 12:26:59.625754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.671 qpair failed and we were unable to recover it. 00:32:46.672 [2024-06-11 12:26:59.626026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.626339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.626347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 00:32:46.672 [2024-06-11 12:26:59.626656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.626972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.626980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 00:32:46.672 [2024-06-11 12:26:59.627269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.627579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.627586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 00:32:46.672 [2024-06-11 12:26:59.627873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.628186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.628194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 00:32:46.672 [2024-06-11 12:26:59.628534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.628704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.628711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 00:32:46.672 [2024-06-11 12:26:59.629060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.629147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.629155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 
00:32:46.672 [2024-06-11 12:26:59.629347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.629645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.629653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 00:32:46.672 [2024-06-11 12:26:59.629939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.630124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.630132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 00:32:46.672 [2024-06-11 12:26:59.630441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.630745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.630752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 00:32:46.672 [2024-06-11 12:26:59.631066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.631332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.631339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 00:32:46.672 [2024-06-11 12:26:59.631664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.631975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.631983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 00:32:46.672 [2024-06-11 12:26:59.632295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.632610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.632618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 00:32:46.672 [2024-06-11 12:26:59.632928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.633237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.633244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 
00:32:46.672 [2024-06-11 12:26:59.633438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.633752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.633759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 00:32:46.672 [2024-06-11 12:26:59.633939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.634227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.634235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 00:32:46.672 [2024-06-11 12:26:59.634537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.634853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.634860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 00:32:46.672 [2024-06-11 12:26:59.635169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.635501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.635509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 00:32:46.672 [2024-06-11 12:26:59.635814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.636129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.636137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 00:32:46.672 [2024-06-11 12:26:59.636446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.636757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.636764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 00:32:46.672 [2024-06-11 12:26:59.637058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.637286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.637294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 
00:32:46.672 [2024-06-11 12:26:59.637607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.637923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.637931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 00:32:46.672 [2024-06-11 12:26:59.638250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.638567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.638574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 00:32:46.672 [2024-06-11 12:26:59.638902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.639219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.639226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 00:32:46.672 [2024-06-11 12:26:59.639403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.639712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.639721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 00:32:46.672 [2024-06-11 12:26:59.640037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.640329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.640337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 00:32:46.672 [2024-06-11 12:26:59.640646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.640955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.640962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 00:32:46.672 [2024-06-11 12:26:59.641250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.641594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.641601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 
00:32:46.672 [2024-06-11 12:26:59.641886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.642143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.642151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.672 qpair failed and we were unable to recover it. 00:32:46.672 [2024-06-11 12:26:59.642455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.642746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.672 [2024-06-11 12:26:59.642753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 00:32:46.673 [2024-06-11 12:26:59.643064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.643387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.643394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 00:32:46.673 [2024-06-11 12:26:59.643681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.643748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.643755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 00:32:46.673 [2024-06-11 12:26:59.643927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.644252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.644259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 00:32:46.673 [2024-06-11 12:26:59.644564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.644879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.644886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 00:32:46.673 [2024-06-11 12:26:59.645192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.645555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.645563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 
00:32:46.673 [2024-06-11 12:26:59.645733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.646052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.646060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 00:32:46.673 [2024-06-11 12:26:59.646375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.646690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.646698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 00:32:46.673 [2024-06-11 12:26:59.646867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.647191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.647198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 00:32:46.673 [2024-06-11 12:26:59.647510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.647820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.647827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 00:32:46.673 [2024-06-11 12:26:59.648137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.648478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.648485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 00:32:46.673 [2024-06-11 12:26:59.648758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.648952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.648959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 00:32:46.673 [2024-06-11 12:26:59.649230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.649542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.649550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 
00:32:46.673 [2024-06-11 12:26:59.649849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.650167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.650175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 00:32:46.673 [2024-06-11 12:26:59.650465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.650710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.650717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 00:32:46.673 [2024-06-11 12:26:59.651031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.651356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.651364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 00:32:46.673 [2024-06-11 12:26:59.651686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.652003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.652012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 00:32:46.673 [2024-06-11 12:26:59.652320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.652636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.652644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 00:32:46.673 [2024-06-11 12:26:59.652944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.653260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.653268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 00:32:46.673 [2024-06-11 12:26:59.653564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.653873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.653880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 
00:32:46.673 [2024-06-11 12:26:59.654128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.654436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.654444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 00:32:46.673 [2024-06-11 12:26:59.654749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.655078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.655087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 00:32:46.673 [2024-06-11 12:26:59.655430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.655744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.655752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 00:32:46.673 [2024-06-11 12:26:59.656046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.656311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.656318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 00:32:46.673 [2024-06-11 12:26:59.656603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.656889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.656897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 00:32:46.673 [2024-06-11 12:26:59.657216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.657553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.657561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 00:32:46.673 [2024-06-11 12:26:59.657883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.658276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.658283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 
00:32:46.673 [2024-06-11 12:26:59.658581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.658896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.658903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 00:32:46.673 [2024-06-11 12:26:59.659236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.659532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.659540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.673 qpair failed and we were unable to recover it. 00:32:46.673 [2024-06-11 12:26:59.659709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.660015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.673 [2024-06-11 12:26:59.660033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 00:32:46.674 [2024-06-11 12:26:59.660363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.660675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.660682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 00:32:46.674 [2024-06-11 12:26:59.660968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.661167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.661175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 00:32:46.674 [2024-06-11 12:26:59.661488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.661824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.661832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 00:32:46.674 [2024-06-11 12:26:59.662131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.662466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.662474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 
00:32:46.674 [2024-06-11 12:26:59.662823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.663137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.663145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 00:32:46.674 [2024-06-11 12:26:59.663450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.663733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.663740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 00:32:46.674 [2024-06-11 12:26:59.664049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.664372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.664379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 00:32:46.674 [2024-06-11 12:26:59.664688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.665001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.665009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 00:32:46.674 [2024-06-11 12:26:59.665307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.665635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.665642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 00:32:46.674 [2024-06-11 12:26:59.665966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.666287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.666295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 00:32:46.674 [2024-06-11 12:26:59.666677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.666963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.666972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 
00:32:46.674 [2024-06-11 12:26:59.667328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.667619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.667627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 00:32:46.674 [2024-06-11 12:26:59.667952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.668266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.668273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 00:32:46.674 [2024-06-11 12:26:59.668575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.668890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.668899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 00:32:46.674 [2024-06-11 12:26:59.669228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.669413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.669421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 00:32:46.674 [2024-06-11 12:26:59.669713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.670029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.670038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 00:32:46.674 [2024-06-11 12:26:59.670327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.670637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.670645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 00:32:46.674 [2024-06-11 12:26:59.670803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.671147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.671155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 
00:32:46.674 [2024-06-11 12:26:59.671458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.671803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.671811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 00:32:46.674 [2024-06-11 12:26:59.671981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.672368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.672376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 00:32:46.674 [2024-06-11 12:26:59.672663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.672820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.672828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 00:32:46.674 [2024-06-11 12:26:59.673121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.673415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.673423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 00:32:46.674 [2024-06-11 12:26:59.673626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.673812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.673820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 00:32:46.674 [2024-06-11 12:26:59.674139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.674472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.674480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 00:32:46.674 [2024-06-11 12:26:59.674807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.675096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.675104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 
00:32:46.674 [2024-06-11 12:26:59.675438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.675736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.675743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 00:32:46.674 [2024-06-11 12:26:59.676060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.676385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.676392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 00:32:46.674 [2024-06-11 12:26:59.676568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.676899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.676907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.674 qpair failed and we were unable to recover it. 00:32:46.674 [2024-06-11 12:26:59.677216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.674 [2024-06-11 12:26:59.677538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.675 [2024-06-11 12:26:59.677545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.675 qpair failed and we were unable to recover it. 00:32:46.675 [2024-06-11 12:26:59.677841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.675 [2024-06-11 12:26:59.678033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.675 [2024-06-11 12:26:59.678041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.675 qpair failed and we were unable to recover it. 00:32:46.675 [2024-06-11 12:26:59.678345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.675 [2024-06-11 12:26:59.678657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.675 [2024-06-11 12:26:59.678664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.675 qpair failed and we were unable to recover it. 00:32:46.675 [2024-06-11 12:26:59.678979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.675 [2024-06-11 12:26:59.679270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.675 [2024-06-11 12:26:59.679279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.675 qpair failed and we were unable to recover it. 
00:32:46.675 [2024-06-11 12:26:59.679565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.675 [2024-06-11 12:26:59.679882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.675 [2024-06-11 12:26:59.679889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.675 qpair failed and we were unable to recover it. 00:32:46.675 [2024-06-11 12:26:59.680186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.675 [2024-06-11 12:26:59.680512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.675 [2024-06-11 12:26:59.680520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.675 qpair failed and we were unable to recover it. 00:32:46.675 [2024-06-11 12:26:59.680838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.675 [2024-06-11 12:26:59.681155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.675 [2024-06-11 12:26:59.681163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.675 qpair failed and we were unable to recover it. 00:32:46.675 [2024-06-11 12:26:59.681479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.675 [2024-06-11 12:26:59.681792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.675 [2024-06-11 12:26:59.681800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.675 qpair failed and we were unable to recover it. 00:32:46.675 [2024-06-11 12:26:59.682080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.675 [2024-06-11 12:26:59.682242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.675 [2024-06-11 12:26:59.682250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.675 qpair failed and we were unable to recover it. 00:32:46.675 [2024-06-11 12:26:59.682424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.675 [2024-06-11 12:26:59.682760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.675 [2024-06-11 12:26:59.682768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.675 qpair failed and we were unable to recover it. 00:32:46.675 [2024-06-11 12:26:59.683072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.945 [2024-06-11 12:26:59.683398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.945 [2024-06-11 12:26:59.683408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.945 qpair failed and we were unable to recover it. 
00:32:46.945 [2024-06-11 12:26:59.683582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.945 [2024-06-11 12:26:59.683914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.945 [2024-06-11 12:26:59.683921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.945 qpair failed and we were unable to recover it. 00:32:46.945 [2024-06-11 12:26:59.684232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.945 [2024-06-11 12:26:59.684543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.945 [2024-06-11 12:26:59.684551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.945 qpair failed and we were unable to recover it. 00:32:46.945 [2024-06-11 12:26:59.684719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.945 [2024-06-11 12:26:59.684882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.945 [2024-06-11 12:26:59.684890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.945 qpair failed and we were unable to recover it. 00:32:46.945 [2024-06-11 12:26:59.685078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.945 [2024-06-11 12:26:59.685404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.945 [2024-06-11 12:26:59.685412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.945 qpair failed and we were unable to recover it. 00:32:46.945 [2024-06-11 12:26:59.685624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.945 [2024-06-11 12:26:59.685797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.945 [2024-06-11 12:26:59.685805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.945 qpair failed and we were unable to recover it. 00:32:46.945 [2024-06-11 12:26:59.686101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.945 [2024-06-11 12:26:59.686488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.945 [2024-06-11 12:26:59.686495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.945 qpair failed and we were unable to recover it. 00:32:46.945 [2024-06-11 12:26:59.686820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.945 [2024-06-11 12:26:59.687129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.945 [2024-06-11 12:26:59.687137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.945 qpair failed and we were unable to recover it. 
00:32:46.945 [2024-06-11 12:26:59.687413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.945 [2024-06-11 12:26:59.687689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.945 [2024-06-11 12:26:59.687697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.945 qpair failed and we were unable to recover it. 00:32:46.945 [2024-06-11 12:26:59.688046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.945 [2024-06-11 12:26:59.688430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.945 [2024-06-11 12:26:59.688438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.945 qpair failed and we were unable to recover it. 00:32:46.945 [2024-06-11 12:26:59.688795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.945 [2024-06-11 12:26:59.688849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.945 [2024-06-11 12:26:59.688857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.945 qpair failed and we were unable to recover it. 00:32:46.945 [2024-06-11 12:26:59.689144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.945 [2024-06-11 12:26:59.689473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.945 [2024-06-11 12:26:59.689482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.945 qpair failed and we were unable to recover it. 00:32:46.945 [2024-06-11 12:26:59.689779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.945 [2024-06-11 12:26:59.690090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.690098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 00:32:46.946 [2024-06-11 12:26:59.690414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.690748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.690757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 00:32:46.946 [2024-06-11 12:26:59.691062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.691233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.691241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 
00:32:46.946 [2024-06-11 12:26:59.691538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.691853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.691860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 00:32:46.946 [2024-06-11 12:26:59.692169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.692355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.692363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 00:32:46.946 [2024-06-11 12:26:59.692667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.693003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.693011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 00:32:46.946 [2024-06-11 12:26:59.693338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.693610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.693618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 00:32:46.946 [2024-06-11 12:26:59.693936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.694140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.694149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 00:32:46.946 [2024-06-11 12:26:59.694350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.694667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.694675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 00:32:46.946 [2024-06-11 12:26:59.694979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.695283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.695291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 
00:32:46.946 [2024-06-11 12:26:59.695622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.695959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.695967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 00:32:46.946 [2024-06-11 12:26:59.696300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.696621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.696630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 00:32:46.946 [2024-06-11 12:26:59.696931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.697226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.697234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 00:32:46.946 [2024-06-11 12:26:59.697408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.697711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.697719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 00:32:46.946 [2024-06-11 12:26:59.698046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.698268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.698276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 00:32:46.946 [2024-06-11 12:26:59.698576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.698897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.698905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 00:32:46.946 [2024-06-11 12:26:59.699230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.699511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.699520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 
00:32:46.946 [2024-06-11 12:26:59.699826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.700139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.700148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 00:32:46.946 [2024-06-11 12:26:59.700452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.700800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.700808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 00:32:46.946 [2024-06-11 12:26:59.700976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.701280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.701288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 00:32:46.946 [2024-06-11 12:26:59.701601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.701937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.701945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 00:32:46.946 [2024-06-11 12:26:59.702252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.702588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.702598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 00:32:46.946 [2024-06-11 12:26:59.702923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.703221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.703229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 00:32:46.946 [2024-06-11 12:26:59.703548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.703859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.703867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 
00:32:46.946 [2024-06-11 12:26:59.704174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.704435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.704451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 00:32:46.946 [2024-06-11 12:26:59.704659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.704972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.704981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 00:32:46.946 [2024-06-11 12:26:59.705307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.705613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.705620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 00:32:46.946 [2024-06-11 12:26:59.705805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.706073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.706081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 00:32:46.946 [2024-06-11 12:26:59.706452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.706784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.946 [2024-06-11 12:26:59.706792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.946 qpair failed and we were unable to recover it. 00:32:46.946 [2024-06-11 12:26:59.707097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.707386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.707393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 00:32:46.947 [2024-06-11 12:26:59.707683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.707996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.708005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 
00:32:46.947 [2024-06-11 12:26:59.708332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.708660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.708669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 00:32:46.947 [2024-06-11 12:26:59.708915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.709218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.709226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 00:32:46.947 [2024-06-11 12:26:59.709530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.709840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.709848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 00:32:46.947 [2024-06-11 12:26:59.710160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.710481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.710488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 00:32:46.947 [2024-06-11 12:26:59.710647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.710939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.710947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 00:32:46.947 [2024-06-11 12:26:59.711262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.711574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.711580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 00:32:46.947 [2024-06-11 12:26:59.711893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.712220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.712228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 
00:32:46.947 [2024-06-11 12:26:59.712570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.712813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.712820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 00:32:46.947 [2024-06-11 12:26:59.713146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.713331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.713338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 00:32:46.947 [2024-06-11 12:26:59.713634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.713886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.713894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 00:32:46.947 [2024-06-11 12:26:59.714191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.714529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.714537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 00:32:46.947 [2024-06-11 12:26:59.714862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.715190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.715198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 00:32:46.947 [2024-06-11 12:26:59.715483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.715760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.715767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 00:32:46.947 [2024-06-11 12:26:59.716073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.716418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.716425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 
00:32:46.947 [2024-06-11 12:26:59.716729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.717076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.717084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 00:32:46.947 [2024-06-11 12:26:59.717328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.717641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.717649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 00:32:46.947 [2024-06-11 12:26:59.717943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.718131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.718139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 00:32:46.947 [2024-06-11 12:26:59.718444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.718782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.718790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 00:32:46.947 [2024-06-11 12:26:59.719114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.719405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.719413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 00:32:46.947 [2024-06-11 12:26:59.719734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.720092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.720100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 00:32:46.947 [2024-06-11 12:26:59.720401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.720730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.720737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 
00:32:46.947 [2024-06-11 12:26:59.720918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.721130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.721138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 00:32:46.947 [2024-06-11 12:26:59.721438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.721773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.721781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 00:32:46.947 [2024-06-11 12:26:59.722107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.722412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.722420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 00:32:46.947 [2024-06-11 12:26:59.722715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.723028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.723036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 00:32:46.947 [2024-06-11 12:26:59.723243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.723431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.723438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 00:32:46.947 [2024-06-11 12:26:59.723741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.724029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.947 [2024-06-11 12:26:59.724037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.947 qpair failed and we were unable to recover it. 00:32:46.948 [2024-06-11 12:26:59.724324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.724635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.724642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 
00:32:46.948 [2024-06-11 12:26:59.724923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.725239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.725247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 00:32:46.948 [2024-06-11 12:26:59.725548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.725834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.725841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 00:32:46.948 [2024-06-11 12:26:59.726156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.726340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.726348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 00:32:46.948 [2024-06-11 12:26:59.726643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.726955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.726963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 00:32:46.948 [2024-06-11 12:26:59.727264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.727574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.727583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 00:32:46.948 [2024-06-11 12:26:59.727896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.728203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.728211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 00:32:46.948 [2024-06-11 12:26:59.728559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.728723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.728731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 
00:32:46.948 [2024-06-11 12:26:59.729062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.729349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.729357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 00:32:46.948 [2024-06-11 12:26:59.729507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.729849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.729857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 00:32:46.948 [2024-06-11 12:26:59.730164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.730488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.730496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 00:32:46.948 [2024-06-11 12:26:59.730801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.731081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.731089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 00:32:46.948 [2024-06-11 12:26:59.731501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.731749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.731756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 00:32:46.948 [2024-06-11 12:26:59.732083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.732396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.732403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 00:32:46.948 [2024-06-11 12:26:59.732717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.733043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.733052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 
00:32:46.948 [2024-06-11 12:26:59.733376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.733533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.733541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 00:32:46.948 [2024-06-11 12:26:59.733833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.734168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.734176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 00:32:46.948 [2024-06-11 12:26:59.734490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.734829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.734837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 00:32:46.948 [2024-06-11 12:26:59.735141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.735466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.735474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 00:32:46.948 [2024-06-11 12:26:59.735778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.736095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.736103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 00:32:46.948 [2024-06-11 12:26:59.736429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.736763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.736772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 00:32:46.948 [2024-06-11 12:26:59.736965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.737228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.737236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 
00:32:46.948 [2024-06-11 12:26:59.737539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.737858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.737867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 00:32:46.948 [2024-06-11 12:26:59.738176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.738455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.738463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 00:32:46.948 [2024-06-11 12:26:59.738748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.739064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.739072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 00:32:46.948 [2024-06-11 12:26:59.739381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.739695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.739703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 00:32:46.948 [2024-06-11 12:26:59.740000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.740282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.740290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 00:32:46.948 [2024-06-11 12:26:59.740598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.740938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.740946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.948 qpair failed and we were unable to recover it. 00:32:46.948 [2024-06-11 12:26:59.741237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.741495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.948 [2024-06-11 12:26:59.741503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 
00:32:46.949 [2024-06-11 12:26:59.741808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.741943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.741951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 00:32:46.949 [2024-06-11 12:26:59.742246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.742571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.742579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 00:32:46.949 [2024-06-11 12:26:59.742881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.743196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.743204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 00:32:46.949 [2024-06-11 12:26:59.743506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.743817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.743824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 00:32:46.949 [2024-06-11 12:26:59.744117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.744429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.744437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 00:32:46.949 [2024-06-11 12:26:59.744752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.745044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.745051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 00:32:46.949 [2024-06-11 12:26:59.745340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.745699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.745707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 
00:32:46.949 [2024-06-11 12:26:59.746006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.746305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.746313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 00:32:46.949 [2024-06-11 12:26:59.746601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.746917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.746926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 00:32:46.949 [2024-06-11 12:26:59.747236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.747571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.747580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 00:32:46.949 [2024-06-11 12:26:59.747882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.748197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.748205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 00:32:46.949 [2024-06-11 12:26:59.748492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.748812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.748820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 00:32:46.949 [2024-06-11 12:26:59.749141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.749441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.749448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 00:32:46.949 [2024-06-11 12:26:59.749624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.749915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.749924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 
00:32:46.949 [2024-06-11 12:26:59.750252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.750579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.750587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 00:32:46.949 [2024-06-11 12:26:59.750908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.751223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.751231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 00:32:46.949 [2024-06-11 12:26:59.751519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.751809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.751818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 00:32:46.949 [2024-06-11 12:26:59.752129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.752445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.752452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 00:32:46.949 [2024-06-11 12:26:59.752755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.753076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.753083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 00:32:46.949 [2024-06-11 12:26:59.753404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.753742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.753750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 00:32:46.949 [2024-06-11 12:26:59.754076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.754278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.754285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 
00:32:46.949 [2024-06-11 12:26:59.754606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.754921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.754929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 00:32:46.949 [2024-06-11 12:26:59.755225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.755380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.755388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 00:32:46.949 [2024-06-11 12:26:59.755543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.755686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.755693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 00:32:46.949 [2024-06-11 12:26:59.755996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.756311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.756319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 00:32:46.949 [2024-06-11 12:26:59.756627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.756917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.756924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 00:32:46.949 [2024-06-11 12:26:59.757255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.757546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.757553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 00:32:46.949 [2024-06-11 12:26:59.757880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.758196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.758204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.949 qpair failed and we were unable to recover it. 
00:32:46.949 [2024-06-11 12:26:59.758566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.949 [2024-06-11 12:26:59.758849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.758857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.950 qpair failed and we were unable to recover it. 00:32:46.950 [2024-06-11 12:26:59.759244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.759535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.759543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.950 qpair failed and we were unable to recover it. 00:32:46.950 [2024-06-11 12:26:59.759847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.760158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.760166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.950 qpair failed and we were unable to recover it. 00:32:46.950 [2024-06-11 12:26:59.760456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.760775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.760782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.950 qpair failed and we were unable to recover it. 00:32:46.950 [2024-06-11 12:26:59.761131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.761419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.761427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.950 qpair failed and we were unable to recover it. 00:32:46.950 [2024-06-11 12:26:59.761740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.762077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.762084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.950 qpair failed and we were unable to recover it. 00:32:46.950 [2024-06-11 12:26:59.762257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.762437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.762444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.950 qpair failed and we were unable to recover it. 
00:32:46.950 [2024-06-11 12:26:59.762608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.762906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.762913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.950 qpair failed and we were unable to recover it. 00:32:46.950 [2024-06-11 12:26:59.763243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.763577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.763584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.950 qpair failed and we were unable to recover it. 00:32:46.950 [2024-06-11 12:26:59.763889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.764205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.764213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.950 qpair failed and we were unable to recover it. 00:32:46.950 [2024-06-11 12:26:59.764521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.764842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.764849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.950 qpair failed and we were unable to recover it. 00:32:46.950 [2024-06-11 12:26:59.765144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.765482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.765490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.950 qpair failed and we were unable to recover it. 00:32:46.950 [2024-06-11 12:26:59.765814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.766151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.766158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.950 qpair failed and we were unable to recover it. 00:32:46.950 [2024-06-11 12:26:59.766461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.766776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.766784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.950 qpair failed and we were unable to recover it. 
00:32:46.950 [2024-06-11 12:26:59.767085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.767402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.767410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.950 qpair failed and we were unable to recover it. 00:32:46.950 [2024-06-11 12:26:59.767696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.768028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.768036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.950 qpair failed and we were unable to recover it. 00:32:46.950 [2024-06-11 12:26:59.768358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.768673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.768681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.950 qpair failed and we were unable to recover it. 00:32:46.950 [2024-06-11 12:26:59.768988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.769302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.769310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.950 qpair failed and we were unable to recover it. 00:32:46.950 [2024-06-11 12:26:59.769613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.769935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.769943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.950 qpair failed and we were unable to recover it. 00:32:46.950 [2024-06-11 12:26:59.770250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.770563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.770571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.950 qpair failed and we were unable to recover it. 00:32:46.950 [2024-06-11 12:26:59.770856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.771168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.771177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.950 qpair failed and we were unable to recover it. 
00:32:46.950 [2024-06-11 12:26:59.771479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.771823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.771830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.950 qpair failed and we were unable to recover it. 00:32:46.950 [2024-06-11 12:26:59.772134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.772447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.772455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.950 qpair failed and we were unable to recover it. 00:32:46.950 [2024-06-11 12:26:59.772741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.773062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.950 [2024-06-11 12:26:59.773070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.950 qpair failed and we were unable to recover it. 00:32:46.950 [2024-06-11 12:26:59.773356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.773708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.773715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 00:32:46.951 [2024-06-11 12:26:59.773875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.774053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.774061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 00:32:46.951 [2024-06-11 12:26:59.774375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.774687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.774694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 00:32:46.951 [2024-06-11 12:26:59.774974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.775163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.775171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 
00:32:46.951 [2024-06-11 12:26:59.775481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.775758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.775765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 00:32:46.951 [2024-06-11 12:26:59.775966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.776261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.776269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 00:32:46.951 [2024-06-11 12:26:59.776576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.776910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.776917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 00:32:46.951 [2024-06-11 12:26:59.777239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.777523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.777531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 00:32:46.951 [2024-06-11 12:26:59.777852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.778200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.778208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 00:32:46.951 [2024-06-11 12:26:59.778516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.778673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.778682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 00:32:46.951 [2024-06-11 12:26:59.778944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.779279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.779286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 
00:32:46.951 [2024-06-11 12:26:59.779578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.779889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.779897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 00:32:46.951 [2024-06-11 12:26:59.780221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.780537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.780544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 00:32:46.951 [2024-06-11 12:26:59.780923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.781240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.781249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 00:32:46.951 [2024-06-11 12:26:59.781550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.781762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.781770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 00:32:46.951 [2024-06-11 12:26:59.782059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.782353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.782361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 00:32:46.951 [2024-06-11 12:26:59.782653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.782965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.782972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 00:32:46.951 [2024-06-11 12:26:59.783283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.783616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.783623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 
00:32:46.951 [2024-06-11 12:26:59.783928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.784245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.784253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 00:32:46.951 [2024-06-11 12:26:59.784556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.784870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.784877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 00:32:46.951 [2024-06-11 12:26:59.785184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.785497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.785505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 00:32:46.951 [2024-06-11 12:26:59.785816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.786134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.786142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 00:32:46.951 [2024-06-11 12:26:59.786447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.786758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.786766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 00:32:46.951 [2024-06-11 12:26:59.787050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.787365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.787374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 00:32:46.951 [2024-06-11 12:26:59.787672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.787997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.788005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 
00:32:46.951 [2024-06-11 12:26:59.788290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.788596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.788603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 00:32:46.951 [2024-06-11 12:26:59.788917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.789239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.789247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 00:32:46.951 [2024-06-11 12:26:59.789439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.789657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.789664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 00:32:46.951 [2024-06-11 12:26:59.789987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.790321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.951 [2024-06-11 12:26:59.790329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.951 qpair failed and we were unable to recover it. 00:32:46.952 [2024-06-11 12:26:59.790637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.790942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.790949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 00:32:46.952 [2024-06-11 12:26:59.791218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.791527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.791535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 00:32:46.952 [2024-06-11 12:26:59.791820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.792108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.792116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 
00:32:46.952 [2024-06-11 12:26:59.792440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.792759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.792767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 00:32:46.952 [2024-06-11 12:26:59.792979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.793305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.793314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 00:32:46.952 [2024-06-11 12:26:59.793504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.793671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.793679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 00:32:46.952 [2024-06-11 12:26:59.793979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.794316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.794324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 00:32:46.952 [2024-06-11 12:26:59.794499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.794802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.794810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 00:32:46.952 [2024-06-11 12:26:59.795120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.795437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.795445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 00:32:46.952 [2024-06-11 12:26:59.795756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.796071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.796078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 
00:32:46.952 [2024-06-11 12:26:59.796366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.796706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.796714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 00:32:46.952 [2024-06-11 12:26:59.796919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.797235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.797242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 00:32:46.952 [2024-06-11 12:26:59.797567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.797856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.797864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 00:32:46.952 [2024-06-11 12:26:59.798252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.798569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.798576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 00:32:46.952 [2024-06-11 12:26:59.798870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.799186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.799196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 00:32:46.952 [2024-06-11 12:26:59.799482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.799645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.799653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 00:32:46.952 [2024-06-11 12:26:59.799963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.800268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.800276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 
00:32:46.952 [2024-06-11 12:26:59.800580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.800747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.800755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 00:32:46.952 [2024-06-11 12:26:59.801090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.801434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.801442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 00:32:46.952 [2024-06-11 12:26:59.801771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.802102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.802109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 00:32:46.952 [2024-06-11 12:26:59.802275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.802582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.802590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 00:32:46.952 [2024-06-11 12:26:59.802886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.803054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.803062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 00:32:46.952 [2024-06-11 12:26:59.803355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.803670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.803678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 00:32:46.952 [2024-06-11 12:26:59.803994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.804295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.804303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 
00:32:46.952 [2024-06-11 12:26:59.804611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.804913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.804921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 00:32:46.952 [2024-06-11 12:26:59.804995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.805291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.805300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 00:32:46.952 [2024-06-11 12:26:59.805597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.805912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.805922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 00:32:46.952 [2024-06-11 12:26:59.806112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.806427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.806435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 00:32:46.952 [2024-06-11 12:26:59.806745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.806930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.952 [2024-06-11 12:26:59.806939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.952 qpair failed and we were unable to recover it. 00:32:46.953 [2024-06-11 12:26:59.807163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.807384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.807392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 00:32:46.953 [2024-06-11 12:26:59.807676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.807945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.807954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 
00:32:46.953 [2024-06-11 12:26:59.808293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.808583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.808592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 00:32:46.953 [2024-06-11 12:26:59.808758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.808938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.808947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 00:32:46.953 [2024-06-11 12:26:59.809258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.809570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.809579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 00:32:46.953 [2024-06-11 12:26:59.809866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.810190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.810199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 00:32:46.953 [2024-06-11 12:26:59.810510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.810704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.810712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 00:32:46.953 [2024-06-11 12:26:59.811047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.811359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.811367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 00:32:46.953 [2024-06-11 12:26:59.811670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.811984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.811992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 
00:32:46.953 [2024-06-11 12:26:59.812280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.812593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.812601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 00:32:46.953 [2024-06-11 12:26:59.812890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.813213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.813221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 00:32:46.953 [2024-06-11 12:26:59.813530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.813842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.813850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 00:32:46.953 [2024-06-11 12:26:59.814143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.814447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.814454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 00:32:46.953 [2024-06-11 12:26:59.814741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.815052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.815060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 00:32:46.953 [2024-06-11 12:26:59.815349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.815674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.815681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 00:32:46.953 [2024-06-11 12:26:59.815858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.816170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.816178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 
00:32:46.953 [2024-06-11 12:26:59.816498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.816848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.816856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 00:32:46.953 [2024-06-11 12:26:59.817179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.817512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.817520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 00:32:46.953 [2024-06-11 12:26:59.817839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.818015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.818026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 00:32:46.953 [2024-06-11 12:26:59.818347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.818601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.818608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 00:32:46.953 [2024-06-11 12:26:59.818914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.819207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.819215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 00:32:46.953 [2024-06-11 12:26:59.819500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.819775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.819782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 00:32:46.953 [2024-06-11 12:26:59.820067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.820441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.820448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 
00:32:46.953 [2024-06-11 12:26:59.820695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.821019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.821026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 00:32:46.953 [2024-06-11 12:26:59.821295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.821584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.821592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 00:32:46.953 [2024-06-11 12:26:59.821801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.821912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.821920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 00:32:46.953 [2024-06-11 12:26:59.822237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.822563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.822571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 00:32:46.953 [2024-06-11 12:26:59.822866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.823215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.823222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.953 qpair failed and we were unable to recover it. 00:32:46.953 [2024-06-11 12:26:59.823519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.823810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.953 [2024-06-11 12:26:59.823817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.954 qpair failed and we were unable to recover it. 00:32:46.954 [2024-06-11 12:26:59.823982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.824252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.824261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.954 qpair failed and we were unable to recover it. 
00:32:46.954 [2024-06-11 12:26:59.824552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.824874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.824882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.954 qpair failed and we were unable to recover it. 00:32:46.954 [2024-06-11 12:26:59.825096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.825357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.825365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.954 qpair failed and we were unable to recover it. 00:32:46.954 [2024-06-11 12:26:59.825668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.825960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.825968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.954 qpair failed and we were unable to recover it. 00:32:46.954 [2024-06-11 12:26:59.826281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.826474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.826481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.954 qpair failed and we were unable to recover it. 00:32:46.954 [2024-06-11 12:26:59.826810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.827134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.827142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.954 qpair failed and we were unable to recover it. 00:32:46.954 [2024-06-11 12:26:59.827459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.827788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.827795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.954 qpair failed and we were unable to recover it. 00:32:46.954 [2024-06-11 12:26:59.828101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.828438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.828446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.954 qpair failed and we were unable to recover it. 
00:32:46.954 [2024-06-11 12:26:59.828722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.829011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.829020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.954 qpair failed and we were unable to recover it. 00:32:46.954 [2024-06-11 12:26:59.829217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.829481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.829489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.954 qpair failed and we were unable to recover it. 00:32:46.954 [2024-06-11 12:26:59.829791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.830083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.830091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.954 qpair failed and we were unable to recover it. 00:32:46.954 [2024-06-11 12:26:59.830392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.830704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.830711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.954 qpair failed and we were unable to recover it. 00:32:46.954 [2024-06-11 12:26:59.831001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.831327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.831335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.954 qpair failed and we were unable to recover it. 00:32:46.954 [2024-06-11 12:26:59.831627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.831941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.831949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.954 qpair failed and we were unable to recover it. 00:32:46.954 [2024-06-11 12:26:59.832254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.832573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.832581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.954 qpair failed and we were unable to recover it. 
00:32:46.954 [2024-06-11 12:26:59.832887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.833167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.833175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.954 qpair failed and we were unable to recover it. 00:32:46.954 [2024-06-11 12:26:59.833374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.833648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.833656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.954 qpair failed and we were unable to recover it. 00:32:46.954 [2024-06-11 12:26:59.833961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.834235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.834243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.954 qpair failed and we were unable to recover it. 00:32:46.954 [2024-06-11 12:26:59.834516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.834827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.834835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.954 qpair failed and we were unable to recover it. 00:32:46.954 [2024-06-11 12:26:59.835037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.835310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.835319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.954 qpair failed and we were unable to recover it. 00:32:46.954 [2024-06-11 12:26:59.835625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.835924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.835931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.954 qpair failed and we were unable to recover it. 00:32:46.954 [2024-06-11 12:26:59.836221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.836544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.836552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.954 qpair failed and we were unable to recover it. 
00:32:46.954 [2024-06-11 12:26:59.836862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.837179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.837186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.954 qpair failed and we were unable to recover it. 00:32:46.954 [2024-06-11 12:26:59.837486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.837806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.837814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.954 qpair failed and we were unable to recover it. 00:32:46.954 [2024-06-11 12:26:59.838141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.838478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.954 [2024-06-11 12:26:59.838485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 00:32:46.955 [2024-06-11 12:26:59.838810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.839129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.839138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 00:32:46.955 [2024-06-11 12:26:59.839445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.839756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.839764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 00:32:46.955 [2024-06-11 12:26:59.840071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.840243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.840252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 00:32:46.955 [2024-06-11 12:26:59.840546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.840708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.840716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 
00:32:46.955 [2024-06-11 12:26:59.841025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.841344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.841352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 00:32:46.955 [2024-06-11 12:26:59.841565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.841748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.841756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 00:32:46.955 [2024-06-11 12:26:59.841913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.842184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.842192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 00:32:46.955 [2024-06-11 12:26:59.842476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.842791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.842798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 00:32:46.955 [2024-06-11 12:26:59.842957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.843286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.843294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 00:32:46.955 [2024-06-11 12:26:59.843595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.843929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.843937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 00:32:46.955 [2024-06-11 12:26:59.844265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.844583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.844591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 
00:32:46.955 [2024-06-11 12:26:59.844878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.845198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.845205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 00:32:46.955 [2024-06-11 12:26:59.845506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.845817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.845826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 00:32:46.955 [2024-06-11 12:26:59.846130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.846331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.846338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 00:32:46.955 [2024-06-11 12:26:59.846642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.846957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.846966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 00:32:46.955 [2024-06-11 12:26:59.847272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.847565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.847574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 00:32:46.955 [2024-06-11 12:26:59.847779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.848064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.848072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 00:32:46.955 [2024-06-11 12:26:59.848373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.848648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.848656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 
00:32:46.955 [2024-06-11 12:26:59.848972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.849276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.849284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 00:32:46.955 [2024-06-11 12:26:59.849570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.849860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.849868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 00:32:46.955 [2024-06-11 12:26:59.850200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.850536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.850544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 00:32:46.955 [2024-06-11 12:26:59.850847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.851177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.851185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 00:32:46.955 [2024-06-11 12:26:59.851354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.851690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.851698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 00:32:46.955 [2024-06-11 12:26:59.851986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.852299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.852306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 00:32:46.955 [2024-06-11 12:26:59.852616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.852932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.852939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 
00:32:46.955 [2024-06-11 12:26:59.853243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.853538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.853546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 00:32:46.955 [2024-06-11 12:26:59.853894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.854181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.854189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 00:32:46.955 [2024-06-11 12:26:59.854513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.854829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.854837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.955 qpair failed and we were unable to recover it. 00:32:46.955 [2024-06-11 12:26:59.855123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.955 [2024-06-11 12:26:59.855442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.855449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 00:32:46.956 [2024-06-11 12:26:59.855738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.856048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.856056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 00:32:46.956 [2024-06-11 12:26:59.856212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.856467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.856474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 00:32:46.956 [2024-06-11 12:26:59.856752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.857064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.857071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 
00:32:46.956 [2024-06-11 12:26:59.857367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.857678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.857686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 00:32:46.956 [2024-06-11 12:26:59.857970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.858128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.858137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 00:32:46.956 [2024-06-11 12:26:59.858532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.858787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.858795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 00:32:46.956 [2024-06-11 12:26:59.859116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.859433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.859440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 00:32:46.956 [2024-06-11 12:26:59.859727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.860041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.860049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 00:32:46.956 [2024-06-11 12:26:59.860247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.860557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.860565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 00:32:46.956 [2024-06-11 12:26:59.860875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.861265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.861272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 
00:32:46.956 [2024-06-11 12:26:59.861601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.861937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.861945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 00:32:46.956 [2024-06-11 12:26:59.862241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.862554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.862563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 00:32:46.956 [2024-06-11 12:26:59.862752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.863069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.863077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 00:32:46.956 [2024-06-11 12:26:59.863401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.863734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.863741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 00:32:46.956 [2024-06-11 12:26:59.864029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.864352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.864360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 00:32:46.956 [2024-06-11 12:26:59.864685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.865021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.865029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 00:32:46.956 [2024-06-11 12:26:59.865231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.865445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.865452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 
00:32:46.956 [2024-06-11 12:26:59.865756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.866091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.866099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 00:32:46.956 [2024-06-11 12:26:59.866318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.866614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.866621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 00:32:46.956 [2024-06-11 12:26:59.866961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.867171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.867179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 00:32:46.956 [2024-06-11 12:26:59.867505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.867823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.867831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 00:32:46.956 [2024-06-11 12:26:59.868144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.868455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.868463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 00:32:46.956 [2024-06-11 12:26:59.868779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.869086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.869094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 00:32:46.956 [2024-06-11 12:26:59.869423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.869639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.869646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 
00:32:46.956 [2024-06-11 12:26:59.869991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.870328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.870336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 00:32:46.956 [2024-06-11 12:26:59.870635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.870950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.870959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 00:32:46.956 [2024-06-11 12:26:59.871253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.871567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.871576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 00:32:46.956 [2024-06-11 12:26:59.871925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.872262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.872270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.956 qpair failed and we were unable to recover it. 00:32:46.956 [2024-06-11 12:26:59.872540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.956 [2024-06-11 12:26:59.872808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.872816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 00:32:46.957 [2024-06-11 12:26:59.873009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.873312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.873320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 00:32:46.957 [2024-06-11 12:26:59.873507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.873854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.873862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 
00:32:46.957 [2024-06-11 12:26:59.874190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.874508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.874516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 00:32:46.957 [2024-06-11 12:26:59.874822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.875153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.875161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 00:32:46.957 [2024-06-11 12:26:59.875466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.875782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.875791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 00:32:46.957 [2024-06-11 12:26:59.876162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.876467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.876475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 00:32:46.957 [2024-06-11 12:26:59.876807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.877087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.877095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 00:32:46.957 [2024-06-11 12:26:59.877403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.877719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.877726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 00:32:46.957 [2024-06-11 12:26:59.878072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.878400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.878408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 
00:32:46.957 [2024-06-11 12:26:59.878696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.879030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.879038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 00:32:46.957 [2024-06-11 12:26:59.879224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.879554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.879562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 00:32:46.957 [2024-06-11 12:26:59.879724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.880001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.880009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 00:32:46.957 [2024-06-11 12:26:59.880288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.880603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.880611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 00:32:46.957 [2024-06-11 12:26:59.880914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.881228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.881236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 00:32:46.957 [2024-06-11 12:26:59.881454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.881744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.881753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 00:32:46.957 [2024-06-11 12:26:59.882059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.882393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.882401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 
00:32:46.957 [2024-06-11 12:26:59.882709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.883030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.883037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 00:32:46.957 [2024-06-11 12:26:59.883322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.883488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.883496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 00:32:46.957 [2024-06-11 12:26:59.883681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.883964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.883971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 00:32:46.957 [2024-06-11 12:26:59.884276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.884608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.884615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 00:32:46.957 [2024-06-11 12:26:59.884921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.885233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.885241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 00:32:46.957 [2024-06-11 12:26:59.885529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.885852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.885860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 00:32:46.957 [2024-06-11 12:26:59.886070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.886377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.886385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 
00:32:46.957 [2024-06-11 12:26:59.886711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.887024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.887032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 00:32:46.957 [2024-06-11 12:26:59.887308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.887630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.887640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 00:32:46.957 [2024-06-11 12:26:59.887932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.888218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.888226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 00:32:46.957 [2024-06-11 12:26:59.888517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.888830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.888837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 00:32:46.957 [2024-06-11 12:26:59.888906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.889245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.889253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.957 qpair failed and we were unable to recover it. 00:32:46.957 [2024-06-11 12:26:59.889559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.957 [2024-06-11 12:26:59.889874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.889881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 00:32:46.958 [2024-06-11 12:26:59.890172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.890505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.890512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 
00:32:46.958 [2024-06-11 12:26:59.890820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.891133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.891141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 00:32:46.958 [2024-06-11 12:26:59.891447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.891760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.891768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 00:32:46.958 [2024-06-11 12:26:59.892074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.892397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.892405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 00:32:46.958 [2024-06-11 12:26:59.892600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.892872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.892880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 00:32:46.958 [2024-06-11 12:26:59.893189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.893532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.893541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 00:32:46.958 [2024-06-11 12:26:59.893857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.894147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.894155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 00:32:46.958 [2024-06-11 12:26:59.894351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.894617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.894624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 
00:32:46.958 [2024-06-11 12:26:59.894903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.895192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.895200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 00:32:46.958 [2024-06-11 12:26:59.895524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.895687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.895696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 00:32:46.958 [2024-06-11 12:26:59.895884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.896153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.896161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 00:32:46.958 [2024-06-11 12:26:59.896483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.896761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.896768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 00:32:46.958 [2024-06-11 12:26:59.897058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.897392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.897400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 00:32:46.958 [2024-06-11 12:26:59.897681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.897997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.898004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 00:32:46.958 [2024-06-11 12:26:59.898319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.898653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.898660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 
00:32:46.958 [2024-06-11 12:26:59.898985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.899312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.899319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 00:32:46.958 [2024-06-11 12:26:59.899501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.899728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.899735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 00:32:46.958 [2024-06-11 12:26:59.900023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.900304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.900312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 00:32:46.958 [2024-06-11 12:26:59.900619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.900935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.900943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 00:32:46.958 [2024-06-11 12:26:59.901257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.901552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.901560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 00:32:46.958 [2024-06-11 12:26:59.901875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.902184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.902191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 00:32:46.958 [2024-06-11 12:26:59.902500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.902811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.902819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 
00:32:46.958 [2024-06-11 12:26:59.903121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.903444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.903451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 00:32:46.958 [2024-06-11 12:26:59.903758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.904077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.904085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 00:32:46.958 [2024-06-11 12:26:59.904385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.904673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.904681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 00:32:46.958 [2024-06-11 12:26:59.904975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.905303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.905311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 00:32:46.958 [2024-06-11 12:26:59.905661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.905999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.906006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 00:32:46.958 [2024-06-11 12:26:59.906321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.906542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.958 [2024-06-11 12:26:59.906549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.958 qpair failed and we were unable to recover it. 00:32:46.958 [2024-06-11 12:26:59.906848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.907168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.907177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.959 qpair failed and we were unable to recover it. 
00:32:46.959 [2024-06-11 12:26:59.907480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.907798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.907806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.959 qpair failed and we were unable to recover it. 00:32:46.959 [2024-06-11 12:26:59.908009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.908282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.908291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.959 qpair failed and we were unable to recover it. 00:32:46.959 [2024-06-11 12:26:59.908595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.908907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.908916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.959 qpair failed and we were unable to recover it. 00:32:46.959 [2024-06-11 12:26:59.909086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.909416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.909425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.959 qpair failed and we were unable to recover it. 00:32:46.959 [2024-06-11 12:26:59.909719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.910026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.910035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.959 qpair failed and we were unable to recover it. 00:32:46.959 [2024-06-11 12:26:59.910309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.910476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.910486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.959 qpair failed and we were unable to recover it. 00:32:46.959 [2024-06-11 12:26:59.910816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.911022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.911035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.959 qpair failed and we were unable to recover it. 
00:32:46.959 [2024-06-11 12:26:59.911274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.911608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.911617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.959 qpair failed and we were unable to recover it. 00:32:46.959 [2024-06-11 12:26:59.911905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.912228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.912237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.959 qpair failed and we were unable to recover it. 00:32:46.959 [2024-06-11 12:26:59.912546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.912732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.912741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.959 qpair failed and we were unable to recover it. 00:32:46.959 [2024-06-11 12:26:59.913060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.913409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.913417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.959 qpair failed and we were unable to recover it. 00:32:46.959 [2024-06-11 12:26:59.913604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.913806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.913814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.959 qpair failed and we were unable to recover it. 00:32:46.959 [2024-06-11 12:26:59.913978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.914272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.914281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.959 qpair failed and we were unable to recover it. 00:32:46.959 [2024-06-11 12:26:59.914590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.914868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.914876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.959 qpair failed and we were unable to recover it. 
00:32:46.959 [2024-06-11 12:26:59.915052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.915331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.915340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.959 qpair failed and we were unable to recover it. 00:32:46.959 [2024-06-11 12:26:59.915641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.915799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.915807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.959 qpair failed and we were unable to recover it. 00:32:46.959 [2024-06-11 12:26:59.915990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.916181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.916190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.959 qpair failed and we were unable to recover it. 00:32:46.959 [2024-06-11 12:26:59.916517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.916763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.916771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.959 qpair failed and we were unable to recover it. 00:32:46.959 [2024-06-11 12:26:59.917079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.917369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.917378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.959 qpair failed and we were unable to recover it. 00:32:46.959 [2024-06-11 12:26:59.917533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.917715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.917724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.959 qpair failed and we were unable to recover it. 00:32:46.959 [2024-06-11 12:26:59.918022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.918304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.918312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.959 qpair failed and we were unable to recover it. 
00:32:46.959 [2024-06-11 12:26:59.918619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.918925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.918933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.959 qpair failed and we were unable to recover it. 00:32:46.959 [2024-06-11 12:26:59.919246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.919555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.919563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.959 qpair failed and we were unable to recover it. 00:32:46.959 [2024-06-11 12:26:59.919889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.920182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.959 [2024-06-11 12:26:59.920190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.959 qpair failed and we were unable to recover it. 00:32:46.959 [2024-06-11 12:26:59.920509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.920844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.920853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 00:32:46.960 [2024-06-11 12:26:59.921160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.921333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.921340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 00:32:46.960 [2024-06-11 12:26:59.921647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.921982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.921991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 00:32:46.960 [2024-06-11 12:26:59.922294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.922605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.922613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 
00:32:46.960 [2024-06-11 12:26:59.922905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.923237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.923244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 00:32:46.960 [2024-06-11 12:26:59.923549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.923882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.923891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 00:32:46.960 [2024-06-11 12:26:59.924192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.924421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.924430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 00:32:46.960 [2024-06-11 12:26:59.924746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.925105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.925113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 00:32:46.960 [2024-06-11 12:26:59.925179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.925388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.925396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 00:32:46.960 [2024-06-11 12:26:59.925679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.926049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.926057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 00:32:46.960 [2024-06-11 12:26:59.926356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.926629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.926637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 
00:32:46.960 [2024-06-11 12:26:59.926942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.927282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.927290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 00:32:46.960 [2024-06-11 12:26:59.927575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.927891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.927898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 00:32:46.960 [2024-06-11 12:26:59.928230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.928543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.928552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 00:32:46.960 [2024-06-11 12:26:59.928863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.929180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.929188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 00:32:46.960 [2024-06-11 12:26:59.929505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.929819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.929827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 00:32:46.960 [2024-06-11 12:26:59.930149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.930472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.930480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 00:32:46.960 [2024-06-11 12:26:59.930775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.931083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.931090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 
00:32:46.960 [2024-06-11 12:26:59.931418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.931667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.931675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 00:32:46.960 [2024-06-11 12:26:59.931940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.932207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.932216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 00:32:46.960 [2024-06-11 12:26:59.932502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.932829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.932838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 00:32:46.960 [2024-06-11 12:26:59.933226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.933515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.933523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 00:32:46.960 [2024-06-11 12:26:59.933828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.934149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.934157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 00:32:46.960 [2024-06-11 12:26:59.934478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.934791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.934798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 00:32:46.960 [2024-06-11 12:26:59.935136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.935418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.935427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 
00:32:46.960 [2024-06-11 12:26:59.935751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.936036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.936044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 00:32:46.960 [2024-06-11 12:26:59.936318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.936630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.936639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 00:32:46.960 [2024-06-11 12:26:59.936925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.937098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.937106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 00:32:46.960 [2024-06-11 12:26:59.937290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.937469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.960 [2024-06-11 12:26:59.937477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.960 qpair failed and we were unable to recover it. 00:32:46.961 [2024-06-11 12:26:59.937762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.937919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.937928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 00:32:46.961 [2024-06-11 12:26:59.938206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.938504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.938512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 00:32:46.961 [2024-06-11 12:26:59.938687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.938964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.938972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 
00:32:46.961 [2024-06-11 12:26:59.939297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.939372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.939381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 00:32:46.961 [2024-06-11 12:26:59.939657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.939960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.939968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 00:32:46.961 [2024-06-11 12:26:59.940276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.940439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.940448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 00:32:46.961 [2024-06-11 12:26:59.940746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.941058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.941066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 00:32:46.961 [2024-06-11 12:26:59.941387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.941700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.941708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 00:32:46.961 [2024-06-11 12:26:59.942012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.942301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.942310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 00:32:46.961 [2024-06-11 12:26:59.942613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.942950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.942958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 
00:32:46.961 [2024-06-11 12:26:59.943258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.943570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.943578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 00:32:46.961 [2024-06-11 12:26:59.943865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.944185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.944192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 00:32:46.961 [2024-06-11 12:26:59.944507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.944818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.944825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 00:32:46.961 [2024-06-11 12:26:59.945133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.945470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.945477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 00:32:46.961 [2024-06-11 12:26:59.945765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.946077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.946086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 00:32:46.961 [2024-06-11 12:26:59.946304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.946524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.946532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 00:32:46.961 [2024-06-11 12:26:59.946836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.947156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.947164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 
00:32:46.961 [2024-06-11 12:26:59.947473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.947783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.947791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 00:32:46.961 [2024-06-11 12:26:59.948081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.948406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.948414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 00:32:46.961 [2024-06-11 12:26:59.948730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.949068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.949076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 00:32:46.961 [2024-06-11 12:26:59.949369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.949681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.949690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 00:32:46.961 [2024-06-11 12:26:59.949995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.950311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.950319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 00:32:46.961 [2024-06-11 12:26:59.950475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.950765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.950772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 00:32:46.961 [2024-06-11 12:26:59.950963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.951288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.951297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 
00:32:46.961 [2024-06-11 12:26:59.951601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.951946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.951955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 00:32:46.961 [2024-06-11 12:26:59.952262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.952579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.952587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 00:32:46.961 [2024-06-11 12:26:59.952882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.953201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.953208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 00:32:46.961 [2024-06-11 12:26:59.953542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.953706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.953712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.961 qpair failed and we were unable to recover it. 00:32:46.961 [2024-06-11 12:26:59.953993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.961 [2024-06-11 12:26:59.954323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.954332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 00:32:46.962 [2024-06-11 12:26:59.954627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.954940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.954948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 00:32:46.962 [2024-06-11 12:26:59.955297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.955489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.955496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 
00:32:46.962 [2024-06-11 12:26:59.955798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.956119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.956127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 00:32:46.962 [2024-06-11 12:26:59.956432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.956744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.956752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 00:32:46.962 [2024-06-11 12:26:59.957059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.957387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.957395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 00:32:46.962 [2024-06-11 12:26:59.957681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.957993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.958001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 00:32:46.962 [2024-06-11 12:26:59.958291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.958483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.958490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 00:32:46.962 [2024-06-11 12:26:59.958820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.959134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.959142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 00:32:46.962 [2024-06-11 12:26:59.959456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.959768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.959776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 
00:32:46.962 [2024-06-11 12:26:59.960064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.960396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.960404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 00:32:46.962 [2024-06-11 12:26:59.960732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.961042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.961051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 00:32:46.962 [2024-06-11 12:26:59.961369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.961682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.961690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 00:32:46.962 [2024-06-11 12:26:59.961995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.962279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.962287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 00:32:46.962 [2024-06-11 12:26:59.962576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.962733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.962741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 00:32:46.962 [2024-06-11 12:26:59.963043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.963426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.963434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 00:32:46.962 [2024-06-11 12:26:59.963740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.964055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.964063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 
00:32:46.962 [2024-06-11 12:26:59.964232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.964538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.964545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 00:32:46.962 [2024-06-11 12:26:59.964861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.965126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.965133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 00:32:46.962 [2024-06-11 12:26:59.965443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.965755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.965763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 00:32:46.962 [2024-06-11 12:26:59.966070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.966146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.966155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 00:32:46.962 [2024-06-11 12:26:59.966452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.966740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.966748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 00:32:46.962 [2024-06-11 12:26:59.967073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.967390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.967398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 00:32:46.962 [2024-06-11 12:26:59.967685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.967997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.968005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 
00:32:46.962 [2024-06-11 12:26:59.968180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.968342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.968349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 00:32:46.962 [2024-06-11 12:26:59.968659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.968948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.968956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 00:32:46.962 [2024-06-11 12:26:59.969259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.969579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.962 [2024-06-11 12:26:59.969589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:46.962 qpair failed and we were unable to recover it. 00:32:46.962 [2024-06-11 12:26:59.969913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.229 [2024-06-11 12:26:59.970063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.229 [2024-06-11 12:26:59.970073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.229 qpair failed and we were unable to recover it. 00:32:47.229 [2024-06-11 12:26:59.970399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.229 [2024-06-11 12:26:59.970747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.229 [2024-06-11 12:26:59.970755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.229 qpair failed and we were unable to recover it. 00:32:47.229 [2024-06-11 12:26:59.971053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.229 [2024-06-11 12:26:59.971390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.229 [2024-06-11 12:26:59.971398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.229 qpair failed and we were unable to recover it. 00:32:47.229 [2024-06-11 12:26:59.971692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.229 [2024-06-11 12:26:59.972009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.229 [2024-06-11 12:26:59.972022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.229 qpair failed and we were unable to recover it. 
00:32:47.229 [2024-06-11 12:26:59.972299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.229 [2024-06-11 12:26:59.972612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.229 [2024-06-11 12:26:59.972619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.229 qpair failed and we were unable to recover it. 00:32:47.229 [2024-06-11 12:26:59.972792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.229 [2024-06-11 12:26:59.973082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.229 [2024-06-11 12:26:59.973091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.229 qpair failed and we were unable to recover it. 00:32:47.229 [2024-06-11 12:26:59.973391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.229 [2024-06-11 12:26:59.973707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.229 [2024-06-11 12:26:59.973714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.229 qpair failed and we were unable to recover it. 00:32:47.229 [2024-06-11 12:26:59.974003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.229 [2024-06-11 12:26:59.974290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.229 [2024-06-11 12:26:59.974298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.229 qpair failed and we were unable to recover it. 00:32:47.229 [2024-06-11 12:26:59.974588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.229 [2024-06-11 12:26:59.974906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.229 [2024-06-11 12:26:59.974914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.229 qpair failed and we were unable to recover it. 00:32:47.229 [2024-06-11 12:26:59.975195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.229 [2024-06-11 12:26:59.975535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.229 [2024-06-11 12:26:59.975544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.229 qpair failed and we were unable to recover it. 00:32:47.229 [2024-06-11 12:26:59.975849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.229 [2024-06-11 12:26:59.976021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.229 [2024-06-11 12:26:59.976030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.229 qpair failed and we were unable to recover it. 
00:32:47.229 [2024-06-11 12:26:59.976364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.229 [2024-06-11 12:26:59.976673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.229 [2024-06-11 12:26:59.976681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.229 qpair failed and we were unable to recover it. 00:32:47.229 [2024-06-11 12:26:59.976978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.229 [2024-06-11 12:26:59.977276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.229 [2024-06-11 12:26:59.977284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.229 qpair failed and we were unable to recover it. 00:32:47.229 [2024-06-11 12:26:59.977588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.977903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.977911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 00:32:47.230 [2024-06-11 12:26:59.978191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.978509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.978517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 00:32:47.230 [2024-06-11 12:26:59.978840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.979151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.979159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 00:32:47.230 [2024-06-11 12:26:59.979439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.979757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.979764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 00:32:47.230 [2024-06-11 12:26:59.980079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.980236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.980244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 
00:32:47.230 [2024-06-11 12:26:59.980552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.980884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.980891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 00:32:47.230 [2024-06-11 12:26:59.981191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.981525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.981533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 00:32:47.230 [2024-06-11 12:26:59.981861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.982195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.982203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 00:32:47.230 [2024-06-11 12:26:59.982509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.982822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.982831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 00:32:47.230 [2024-06-11 12:26:59.983138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.983335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.983343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 00:32:47.230 [2024-06-11 12:26:59.983639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.983953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.983961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 00:32:47.230 [2024-06-11 12:26:59.984287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.984600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.984608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 
00:32:47.230 [2024-06-11 12:26:59.984914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.985197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.985205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 00:32:47.230 [2024-06-11 12:26:59.985512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.985669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.985677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 00:32:47.230 [2024-06-11 12:26:59.986015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.986182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.986189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 00:32:47.230 [2024-06-11 12:26:59.986466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.986783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.986791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 00:32:47.230 [2024-06-11 12:26:59.986966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.987317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.987326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 00:32:47.230 [2024-06-11 12:26:59.987632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.987908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.987916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 00:32:47.230 [2024-06-11 12:26:59.988235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.988553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.988561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 
00:32:47.230 [2024-06-11 12:26:59.988866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.989179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.989187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 00:32:47.230 [2024-06-11 12:26:59.989495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.989790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.989797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 00:32:47.230 [2024-06-11 12:26:59.990103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.990333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.990341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 00:32:47.230 [2024-06-11 12:26:59.990672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.991005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.991012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 00:32:47.230 [2024-06-11 12:26:59.991346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.991640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.991647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 00:32:47.230 [2024-06-11 12:26:59.991916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.992192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.992200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 00:32:47.230 [2024-06-11 12:26:59.992513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.992825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.992832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 
00:32:47.230 [2024-06-11 12:26:59.993129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.993317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.993325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.230 qpair failed and we were unable to recover it. 00:32:47.230 [2024-06-11 12:26:59.993661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.993982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.230 [2024-06-11 12:26:59.993991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 00:32:47.231 [2024-06-11 12:26:59.994298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:26:59.994614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:26:59.994623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 00:32:47.231 [2024-06-11 12:26:59.994942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:26:59.995282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:26:59.995290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 00:32:47.231 [2024-06-11 12:26:59.995471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:26:59.995665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:26:59.995672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 00:32:47.231 [2024-06-11 12:26:59.995913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:26:59.996214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:26:59.996222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 00:32:47.231 [2024-06-11 12:26:59.996533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:26:59.996850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:26:59.996858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 
00:32:47.231 [2024-06-11 12:26:59.997180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:26:59.997503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:26:59.997510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 00:32:47.231 [2024-06-11 12:26:59.997800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:26:59.997964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:26:59.997972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 00:32:47.231 [2024-06-11 12:26:59.998267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:26:59.998621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:26:59.998629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 00:32:47.231 [2024-06-11 12:26:59.998940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:26:59.999257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:26:59.999265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 00:32:47.231 [2024-06-11 12:26:59.999536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:26:59.999870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:26:59.999877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 00:32:47.231 [2024-06-11 12:27:00.000186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.000505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.000512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 00:32:47.231 [2024-06-11 12:27:00.000805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.001095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.001103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 
00:32:47.231 [2024-06-11 12:27:00.001403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.001576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.001584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 00:32:47.231 [2024-06-11 12:27:00.001883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.002146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.002154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 00:32:47.231 [2024-06-11 12:27:00.002386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.003081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.003092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 00:32:47.231 [2024-06-11 12:27:00.003520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.003785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.003793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 00:32:47.231 [2024-06-11 12:27:00.004169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.004514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.004521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 00:32:47.231 [2024-06-11 12:27:00.004707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.004926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.004934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 00:32:47.231 [2024-06-11 12:27:00.005300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.005490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.005498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 
00:32:47.231 [2024-06-11 12:27:00.005793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.006060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.006068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 00:32:47.231 [2024-06-11 12:27:00.006296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.006637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.006644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 00:32:47.231 [2024-06-11 12:27:00.006954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.007249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.007257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 00:32:47.231 [2024-06-11 12:27:00.007419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.007765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.007773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 00:32:47.231 [2024-06-11 12:27:00.008103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.008424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.008432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 00:32:47.231 [2024-06-11 12:27:00.008737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.008934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.008942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 00:32:47.231 [2024-06-11 12:27:00.009157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.009358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.009365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 
00:32:47.231 [2024-06-11 12:27:00.009684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.010006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.010014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 00:32:47.231 [2024-06-11 12:27:00.010211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.010405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.231 [2024-06-11 12:27:00.010413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.231 qpair failed and we were unable to recover it. 00:32:47.231 [2024-06-11 12:27:00.010634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.010839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.010847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 00:32:47.232 [2024-06-11 12:27:00.011042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.011131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.011139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 00:32:47.232 [2024-06-11 12:27:00.011466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.011690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.011697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 00:32:47.232 [2024-06-11 12:27:00.011913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.012134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.012142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 00:32:47.232 [2024-06-11 12:27:00.012223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.012343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.012351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 
00:32:47.232 [2024-06-11 12:27:00.012617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.012965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.012974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 00:32:47.232 [2024-06-11 12:27:00.013168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.013252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.013258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 00:32:47.232 [2024-06-11 12:27:00.013576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.013677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.013685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 00:32:47.232 [2024-06-11 12:27:00.014029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.014128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.014137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 00:32:47.232 [2024-06-11 12:27:00.014334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.014423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.014430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 00:32:47.232 [2024-06-11 12:27:00.014625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.014818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.014826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 00:32:47.232 [2024-06-11 12:27:00.015207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.015501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.015509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 
00:32:47.232 [2024-06-11 12:27:00.015729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.016029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.016039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 00:32:47.232 [2024-06-11 12:27:00.016358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.016691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.016699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 00:32:47.232 [2024-06-11 12:27:00.017010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.017334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.017342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 00:32:47.232 [2024-06-11 12:27:00.017658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.018006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.018014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 00:32:47.232 [2024-06-11 12:27:00.018345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.018665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.018673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 00:32:47.232 [2024-06-11 12:27:00.018931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.019100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.019108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 00:32:47.232 [2024-06-11 12:27:00.019288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.019580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.019588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 
00:32:47.232 [2024-06-11 12:27:00.019909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.020214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.020224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 00:32:47.232 [2024-06-11 12:27:00.020478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.020664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.020673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 00:32:47.232 [2024-06-11 12:27:00.020987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.021282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.021290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 00:32:47.232 [2024-06-11 12:27:00.021654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.021796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.021803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 00:32:47.232 [2024-06-11 12:27:00.022024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.022334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.022342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 00:32:47.232 [2024-06-11 12:27:00.022652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.022981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.022989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 00:32:47.232 [2024-06-11 12:27:00.023232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.023528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.023536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 
00:32:47.232 [2024-06-11 12:27:00.023830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.024145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.024153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 00:32:47.232 [2024-06-11 12:27:00.024492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.024778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.024786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.232 qpair failed and we were unable to recover it. 00:32:47.232 [2024-06-11 12:27:00.025087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.025300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.232 [2024-06-11 12:27:00.025307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 00:32:47.233 [2024-06-11 12:27:00.025618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.025959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.025967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 00:32:47.233 [2024-06-11 12:27:00.026266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.026587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.026594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 00:32:47.233 [2024-06-11 12:27:00.026886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.027190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.027198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 00:32:47.233 [2024-06-11 12:27:00.027530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.027873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.027881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 
00:32:47.233 [2024-06-11 12:27:00.028223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.028397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.028405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 00:32:47.233 [2024-06-11 12:27:00.028708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.029036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.029043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 00:32:47.233 [2024-06-11 12:27:00.029353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.029689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.029696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 00:32:47.233 [2024-06-11 12:27:00.029988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.030286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.030294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 00:32:47.233 [2024-06-11 12:27:00.030602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.030916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.030924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 00:32:47.233 [2024-06-11 12:27:00.031100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.031294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.031301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 00:32:47.233 [2024-06-11 12:27:00.031603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.031776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.031783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 
00:32:47.233 [2024-06-11 12:27:00.031981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.032259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.032266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 00:32:47.233 [2024-06-11 12:27:00.032575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.032897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.032905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 00:32:47.233 [2024-06-11 12:27:00.033197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.033532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.033540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 00:32:47.233 [2024-06-11 12:27:00.033875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.034165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.034172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 00:32:47.233 [2024-06-11 12:27:00.034491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.034783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.034791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 00:32:47.233 [2024-06-11 12:27:00.035099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.035416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.035424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 00:32:47.233 [2024-06-11 12:27:00.035723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.036041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.036049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 
00:32:47.233 [2024-06-11 12:27:00.036332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.036638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.036646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 00:32:47.233 [2024-06-11 12:27:00.036955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.037272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.037280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 00:32:47.233 [2024-06-11 12:27:00.037588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.037913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.037921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 00:32:47.233 [2024-06-11 12:27:00.038122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.038399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.038407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 00:32:47.233 [2024-06-11 12:27:00.038737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.038937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.038945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 00:32:47.233 [2024-06-11 12:27:00.039269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.039449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.039456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 00:32:47.233 [2024-06-11 12:27:00.039755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.040040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.040048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 
00:32:47.233 [2024-06-11 12:27:00.040386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.040679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.040687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 00:32:47.233 [2024-06-11 12:27:00.040989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.041319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.041327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 00:32:47.233 [2024-06-11 12:27:00.041592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.041712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.233 [2024-06-11 12:27:00.041721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.233 qpair failed and we were unable to recover it. 00:32:47.234 [2024-06-11 12:27:00.041916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.042160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.042168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.234 qpair failed and we were unable to recover it. 00:32:47.234 [2024-06-11 12:27:00.042512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.042803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.042811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.234 qpair failed and we were unable to recover it. 00:32:47.234 [2024-06-11 12:27:00.042963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.043250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.043258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.234 qpair failed and we were unable to recover it. 00:32:47.234 [2024-06-11 12:27:00.043431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.043743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.043751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.234 qpair failed and we were unable to recover it. 
00:32:47.234 [2024-06-11 12:27:00.044061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.044390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.044398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.234 qpair failed and we were unable to recover it. 00:32:47.234 [2024-06-11 12:27:00.044715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.045010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.045022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.234 qpair failed and we were unable to recover it. 00:32:47.234 [2024-06-11 12:27:00.045362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.045679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.045687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.234 qpair failed and we were unable to recover it. 00:32:47.234 [2024-06-11 12:27:00.045999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.046288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.046296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.234 qpair failed and we were unable to recover it. 00:32:47.234 [2024-06-11 12:27:00.046604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.046899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.046907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.234 qpair failed and we were unable to recover it. 00:32:47.234 [2024-06-11 12:27:00.047275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.047567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.047575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.234 qpair failed and we were unable to recover it. 00:32:47.234 [2024-06-11 12:27:00.047883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.048128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.048136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.234 qpair failed and we were unable to recover it. 
00:32:47.234 [2024-06-11 12:27:00.048440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.048640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.048647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.234 qpair failed and we were unable to recover it. 00:32:47.234 [2024-06-11 12:27:00.048960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.049281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.049288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.234 qpair failed and we were unable to recover it. 00:32:47.234 [2024-06-11 12:27:00.049511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.049770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.049777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.234 qpair failed and we were unable to recover it. 00:32:47.234 [2024-06-11 12:27:00.050014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.050226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.050235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.234 qpair failed and we were unable to recover it. 00:32:47.234 [2024-06-11 12:27:00.050458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.050820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.050829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.234 qpair failed and we were unable to recover it. 00:32:47.234 [2024-06-11 12:27:00.051027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.051125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.051133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.234 qpair failed and we were unable to recover it. 00:32:47.234 [2024-06-11 12:27:00.051352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.051694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.051702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.234 qpair failed and we were unable to recover it. 
00:32:47.234 [2024-06-11 12:27:00.051791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.051903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.051911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.234 qpair failed and we were unable to recover it. 00:32:47.234 [2024-06-11 12:27:00.052006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.052269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.052278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.234 qpair failed and we were unable to recover it. 00:32:47.234 [2024-06-11 12:27:00.052484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.052541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.052548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.234 qpair failed and we were unable to recover it. 00:32:47.234 [2024-06-11 12:27:00.052803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.052876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.052884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.234 qpair failed and we were unable to recover it. 00:32:47.234 [2024-06-11 12:27:00.053241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.053553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.234 [2024-06-11 12:27:00.053562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 00:32:47.235 [2024-06-11 12:27:00.053740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.054062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.054070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 00:32:47.235 [2024-06-11 12:27:00.054373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.054679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.054689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 
00:32:47.235 [2024-06-11 12:27:00.055030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.055347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.055355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 00:32:47.235 [2024-06-11 12:27:00.055684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.055883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.055890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 00:32:47.235 [2024-06-11 12:27:00.056242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.056576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.056584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 00:32:47.235 [2024-06-11 12:27:00.056894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.057209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.057218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 00:32:47.235 [2024-06-11 12:27:00.057551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.057726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.057733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 00:32:47.235 [2024-06-11 12:27:00.057910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.058109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.058116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 00:32:47.235 [2024-06-11 12:27:00.058470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.058737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.058745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 
00:32:47.235 [2024-06-11 12:27:00.059090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.059176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.059182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 00:32:47.235 [2024-06-11 12:27:00.059497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.059670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.059679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 00:32:47.235 [2024-06-11 12:27:00.060019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.060366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.060376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 00:32:47.235 [2024-06-11 12:27:00.060571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.060765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.060773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 00:32:47.235 [2024-06-11 12:27:00.061090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.061399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.061406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 00:32:47.235 [2024-06-11 12:27:00.061734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.061979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.061987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 00:32:47.235 [2024-06-11 12:27:00.062178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.062473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.062481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 
00:32:47.235 [2024-06-11 12:27:00.062791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.062962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.062970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 00:32:47.235 [2024-06-11 12:27:00.063332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.063663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.063671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 00:32:47.235 [2024-06-11 12:27:00.063983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.064278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.064286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 00:32:47.235 [2024-06-11 12:27:00.064600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.064794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.064802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 00:32:47.235 [2024-06-11 12:27:00.065132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.065206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.065213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 00:32:47.235 [2024-06-11 12:27:00.065522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.065838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.065847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 00:32:47.235 [2024-06-11 12:27:00.066143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.066353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.066360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 
00:32:47.235 [2024-06-11 12:27:00.066534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.066867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.066874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 00:32:47.235 [2024-06-11 12:27:00.067067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.067279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.067287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 00:32:47.235 [2024-06-11 12:27:00.067578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.067913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.067921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 00:32:47.235 [2024-06-11 12:27:00.068066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.068504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.068512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 00:32:47.235 [2024-06-11 12:27:00.068817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.069129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.235 [2024-06-11 12:27:00.069137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.235 qpair failed and we were unable to recover it. 00:32:47.236 [2024-06-11 12:27:00.069376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.069721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.069729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 00:32:47.236 [2024-06-11 12:27:00.070043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.070357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.070365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 
00:32:47.236 [2024-06-11 12:27:00.070736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.071001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.071009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 00:32:47.236 [2024-06-11 12:27:00.071300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.071634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.071644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 00:32:47.236 [2024-06-11 12:27:00.071937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.072235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.072243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 00:32:47.236 [2024-06-11 12:27:00.072544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.072836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.072844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 00:32:47.236 [2024-06-11 12:27:00.073147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.073463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.073471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 00:32:47.236 [2024-06-11 12:27:00.073781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.074084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.074092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 00:32:47.236 [2024-06-11 12:27:00.074374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.074691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.074698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 
00:32:47.236 [2024-06-11 12:27:00.075007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.075307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.075315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 00:32:47.236 [2024-06-11 12:27:00.075636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.075845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.075852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 00:32:47.236 [2024-06-11 12:27:00.076146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.076329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.076337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 00:32:47.236 [2024-06-11 12:27:00.076642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.076946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.076954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 00:32:47.236 [2024-06-11 12:27:00.077266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.077578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.077585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 00:32:47.236 [2024-06-11 12:27:00.077899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.078074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.078082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 00:32:47.236 [2024-06-11 12:27:00.078346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.078504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.078513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 
00:32:47.236 [2024-06-11 12:27:00.078832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.079177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.079185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 00:32:47.236 [2024-06-11 12:27:00.079501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.079818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.079826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 00:32:47.236 [2024-06-11 12:27:00.080090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.080406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.080414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 00:32:47.236 [2024-06-11 12:27:00.080708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.080892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.080899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 00:32:47.236 [2024-06-11 12:27:00.081195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.081532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.081540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 00:32:47.236 [2024-06-11 12:27:00.081845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.082171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.082179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 00:32:47.236 [2024-06-11 12:27:00.082357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.082617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.082625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 
00:32:47.236 [2024-06-11 12:27:00.082925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.083234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.083242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 00:32:47.236 [2024-06-11 12:27:00.083548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.083867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.083875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 00:32:47.236 [2024-06-11 12:27:00.084183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.084479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.084487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 00:32:47.236 [2024-06-11 12:27:00.084663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.084950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.084959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 00:32:47.236 [2024-06-11 12:27:00.085265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.085460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.085468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.236 qpair failed and we were unable to recover it. 00:32:47.236 [2024-06-11 12:27:00.085632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.236 [2024-06-11 12:27:00.085815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.085824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 00:32:47.237 [2024-06-11 12:27:00.086032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.086328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.086337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 
00:32:47.237 [2024-06-11 12:27:00.087252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.087604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.087613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 00:32:47.237 [2024-06-11 12:27:00.087789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.088094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.088103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 00:32:47.237 [2024-06-11 12:27:00.088412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.088750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.088757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 00:32:47.237 [2024-06-11 12:27:00.089072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.089256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.089264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 00:32:47.237 [2024-06-11 12:27:00.089573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.089775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.089782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 00:32:47.237 [2024-06-11 12:27:00.090137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.090444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.090453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 00:32:47.237 [2024-06-11 12:27:00.090755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.090937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.090944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 
00:32:47.237 [2024-06-11 12:27:00.091222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.091552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.091560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 00:32:47.237 [2024-06-11 12:27:00.091869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.092172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.092180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 00:32:47.237 [2024-06-11 12:27:00.092495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.092840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.092848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 00:32:47.237 [2024-06-11 12:27:00.093159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.093475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.093482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 00:32:47.237 [2024-06-11 12:27:00.093860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.094198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.094206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 00:32:47.237 [2024-06-11 12:27:00.094494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.094811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.094819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 00:32:47.237 [2024-06-11 12:27:00.095119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.095341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.095349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 
00:32:47.237 [2024-06-11 12:27:00.095662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.095985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.095993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 00:32:47.237 [2024-06-11 12:27:00.096288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.096513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.096520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 00:32:47.237 [2024-06-11 12:27:00.096822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.097197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.097205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 00:32:47.237 [2024-06-11 12:27:00.097521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.097782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.097790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 00:32:47.237 [2024-06-11 12:27:00.098082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.098299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.098307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 00:32:47.237 [2024-06-11 12:27:00.098583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.098896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.098904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 00:32:47.237 [2024-06-11 12:27:00.099199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.099489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.099498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 
00:32:47.237 [2024-06-11 12:27:00.099814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.100108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.100116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 00:32:47.237 [2024-06-11 12:27:00.100436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.100792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.100800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 00:32:47.237 [2024-06-11 12:27:00.101059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.101324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.101332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 00:32:47.237 [2024-06-11 12:27:00.101596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.101896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.101903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 00:32:47.237 [2024-06-11 12:27:00.102200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.102517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.102525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 00:32:47.237 [2024-06-11 12:27:00.102829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.103109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.237 [2024-06-11 12:27:00.103116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.237 qpair failed and we were unable to recover it. 00:32:47.237 [2024-06-11 12:27:00.103447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.103732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.103740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.238 qpair failed and we were unable to recover it. 
00:32:47.238 [2024-06-11 12:27:00.104047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.104180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.104188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.238 qpair failed and we were unable to recover it. 00:32:47.238 [2024-06-11 12:27:00.104505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.104821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.104828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.238 qpair failed and we were unable to recover it. 00:32:47.238 [2024-06-11 12:27:00.105130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.105334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.105342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.238 qpair failed and we were unable to recover it. 00:32:47.238 [2024-06-11 12:27:00.105661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.105937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.105946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.238 qpair failed and we were unable to recover it. 00:32:47.238 [2024-06-11 12:27:00.106177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.106535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.106543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.238 qpair failed and we were unable to recover it. 00:32:47.238 [2024-06-11 12:27:00.106831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.107150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.107157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.238 qpair failed and we were unable to recover it. 00:32:47.238 [2024-06-11 12:27:00.107485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.107657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.107665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.238 qpair failed and we were unable to recover it. 
00:32:47.238 [2024-06-11 12:27:00.107969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.108285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.108293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.238 qpair failed and we were unable to recover it. 00:32:47.238 [2024-06-11 12:27:00.108589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.108899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.108906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.238 qpair failed and we were unable to recover it. 00:32:47.238 [2024-06-11 12:27:00.109060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.109275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.109283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.238 qpair failed and we were unable to recover it. 00:32:47.238 [2024-06-11 12:27:00.109607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.109822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.109829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.238 qpair failed and we were unable to recover it. 00:32:47.238 [2024-06-11 12:27:00.110031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.110361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.110369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.238 qpair failed and we were unable to recover it. 00:32:47.238 [2024-06-11 12:27:00.110542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.110748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.110756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.238 qpair failed and we were unable to recover it. 00:32:47.238 [2024-06-11 12:27:00.111084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.111416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.111423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.238 qpair failed and we were unable to recover it. 
00:32:47.238 [2024-06-11 12:27:00.111586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.111896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.111904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.238 qpair failed and we were unable to recover it. 00:32:47.238 [2024-06-11 12:27:00.112214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.112555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.112563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.238 qpair failed and we were unable to recover it. 00:32:47.238 [2024-06-11 12:27:00.112889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.113183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.113191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.238 qpair failed and we were unable to recover it. 00:32:47.238 [2024-06-11 12:27:00.113509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.113766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.113773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.238 qpair failed and we were unable to recover it. 00:32:47.238 [2024-06-11 12:27:00.113935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.114102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.114110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.238 qpair failed and we were unable to recover it. 00:32:47.238 [2024-06-11 12:27:00.114424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.114763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.114771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.238 qpair failed and we were unable to recover it. 00:32:47.238 [2024-06-11 12:27:00.114969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.115258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.115266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.238 qpair failed and we were unable to recover it. 
00:32:47.238 [2024-06-11 12:27:00.115601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.115919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.115928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.238 qpair failed and we were unable to recover it. 00:32:47.238 [2024-06-11 12:27:00.116117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.116349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.116357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.238 qpair failed and we were unable to recover it. 00:32:47.238 [2024-06-11 12:27:00.116635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.238 [2024-06-11 12:27:00.116955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.239 [2024-06-11 12:27:00.116962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.239 qpair failed and we were unable to recover it. 00:32:47.239 [2024-06-11 12:27:00.117250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.239 [2024-06-11 12:27:00.117586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.239 [2024-06-11 12:27:00.117594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.239 qpair failed and we were unable to recover it. 00:32:47.239 [2024-06-11 12:27:00.117896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.239 [2024-06-11 12:27:00.118310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.239 [2024-06-11 12:27:00.118324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.239 qpair failed and we were unable to recover it. 00:32:47.239 [2024-06-11 12:27:00.118720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.239 [2024-06-11 12:27:00.119014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.239 [2024-06-11 12:27:00.119027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.239 qpair failed and we were unable to recover it. 00:32:47.239 [2024-06-11 12:27:00.119330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.239 [2024-06-11 12:27:00.119648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.239 [2024-06-11 12:27:00.119657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.239 qpair failed and we were unable to recover it. 
00:32:47.239 [2024-06-11 12:27:00.119897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.239 [2024-06-11 12:27:00.120062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.239 [2024-06-11 12:27:00.120072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.239 qpair failed and we were unable to recover it. 00:32:47.239 [2024-06-11 12:27:00.120385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.239 [2024-06-11 12:27:00.120558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.239 [2024-06-11 12:27:00.120566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.239 qpair failed and we were unable to recover it. 00:32:47.239 [2024-06-11 12:27:00.120843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.239 [2024-06-11 12:27:00.121154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.239 [2024-06-11 12:27:00.121162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.239 qpair failed and we were unable to recover it. 00:32:47.239 [2024-06-11 12:27:00.121470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.239 [2024-06-11 12:27:00.121785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.239 [2024-06-11 12:27:00.121792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.239 qpair failed and we were unable to recover it. 00:32:47.239 [2024-06-11 12:27:00.122082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.239 [2024-06-11 12:27:00.122409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.239 [2024-06-11 12:27:00.122416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.239 qpair failed and we were unable to recover it. 00:32:47.239 [2024-06-11 12:27:00.122726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.239 [2024-06-11 12:27:00.122949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.239 [2024-06-11 12:27:00.122957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.239 qpair failed and we were unable to recover it. 00:32:47.239 [2024-06-11 12:27:00.123131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.239 [2024-06-11 12:27:00.123394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.239 [2024-06-11 12:27:00.123402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.239 qpair failed and we were unable to recover it. 
00:32:47.244 [2024-06-11 12:27:00.204633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.244 [2024-06-11 12:27:00.204975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.244 [2024-06-11 12:27:00.204982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.244 qpair failed and we were unable to recover it. 00:32:47.244 [2024-06-11 12:27:00.205299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.244 [2024-06-11 12:27:00.205467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.244 [2024-06-11 12:27:00.205475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.244 qpair failed and we were unable to recover it. 00:32:47.244 [2024-06-11 12:27:00.205794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.244 [2024-06-11 12:27:00.206129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.244 [2024-06-11 12:27:00.206137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.244 qpair failed and we were unable to recover it. 00:32:47.244 [2024-06-11 12:27:00.206483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.244 [2024-06-11 12:27:00.206790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.244 [2024-06-11 12:27:00.206798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.244 qpair failed and we were unable to recover it. 00:32:47.244 [2024-06-11 12:27:00.207006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.244 [2024-06-11 12:27:00.207287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.244 [2024-06-11 12:27:00.207295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.244 qpair failed and we were unable to recover it. 00:32:47.244 [2024-06-11 12:27:00.207686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.244 [2024-06-11 12:27:00.208000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.244 [2024-06-11 12:27:00.208007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.244 qpair failed and we were unable to recover it. 00:32:47.244 [2024-06-11 12:27:00.208341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.244 [2024-06-11 12:27:00.208660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.244 [2024-06-11 12:27:00.208668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.244 qpair failed and we were unable to recover it. 
00:32:47.244 [2024-06-11 12:27:00.208991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.244 [2024-06-11 12:27:00.209266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.209274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 00:32:47.245 [2024-06-11 12:27:00.209581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.209899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.209908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 00:32:47.245 [2024-06-11 12:27:00.210211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.210558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.210566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 00:32:47.245 [2024-06-11 12:27:00.210795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.211028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.211037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 00:32:47.245 [2024-06-11 12:27:00.211356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.211673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.211680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 00:32:47.245 [2024-06-11 12:27:00.211993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.212317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.212325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 00:32:47.245 [2024-06-11 12:27:00.212620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.212971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.212978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 
00:32:47.245 [2024-06-11 12:27:00.213176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.213489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.213497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 00:32:47.245 [2024-06-11 12:27:00.213707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.213911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.213919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 00:32:47.245 [2024-06-11 12:27:00.214127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.214401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.214408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 00:32:47.245 [2024-06-11 12:27:00.214573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.214946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.214954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 00:32:47.245 [2024-06-11 12:27:00.215229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.215526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.215535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 00:32:47.245 [2024-06-11 12:27:00.215827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.216169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.216177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 00:32:47.245 [2024-06-11 12:27:00.216494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.216794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.216802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 
00:32:47.245 [2024-06-11 12:27:00.217092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.217363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.217370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 00:32:47.245 [2024-06-11 12:27:00.217659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.217974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.217981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 00:32:47.245 [2024-06-11 12:27:00.218278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.218614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.218621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 00:32:47.245 [2024-06-11 12:27:00.218939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.219150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.219158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 00:32:47.245 [2024-06-11 12:27:00.219525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.219845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.219853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 00:32:47.245 [2024-06-11 12:27:00.220206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.220406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.220414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 00:32:47.245 [2024-06-11 12:27:00.220589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.220862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.220870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 
00:32:47.245 [2024-06-11 12:27:00.220927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.221241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.221250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 00:32:47.245 [2024-06-11 12:27:00.221532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.221835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.221843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 00:32:47.245 [2024-06-11 12:27:00.222163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.222385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.222394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 00:32:47.245 [2024-06-11 12:27:00.222683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.222977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.222985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 00:32:47.245 [2024-06-11 12:27:00.223314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.223590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.223598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 00:32:47.245 [2024-06-11 12:27:00.223810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.223915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.223922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 00:32:47.245 [2024-06-11 12:27:00.224229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.224598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.224606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.245 qpair failed and we were unable to recover it. 
00:32:47.245 [2024-06-11 12:27:00.224950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.245 [2024-06-11 12:27:00.225309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.225318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 00:32:47.246 [2024-06-11 12:27:00.225613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.225848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.225856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 00:32:47.246 [2024-06-11 12:27:00.226222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.226534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.226542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 00:32:47.246 [2024-06-11 12:27:00.226839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.227098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.227106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 00:32:47.246 [2024-06-11 12:27:00.227436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.227773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.227781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 00:32:47.246 [2024-06-11 12:27:00.228087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.228360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.228367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 00:32:47.246 [2024-06-11 12:27:00.228702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.229023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.229031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 
00:32:47.246 [2024-06-11 12:27:00.229361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.229642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.229650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 00:32:47.246 [2024-06-11 12:27:00.229944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.230144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.230153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 00:32:47.246 [2024-06-11 12:27:00.230463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.230619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.230628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 00:32:47.246 [2024-06-11 12:27:00.230954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.231268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.231276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 00:32:47.246 [2024-06-11 12:27:00.231598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.231753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.231762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 00:32:47.246 [2024-06-11 12:27:00.232079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.232358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.232374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 00:32:47.246 [2024-06-11 12:27:00.232586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.232887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.232894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 
00:32:47.246 [2024-06-11 12:27:00.233084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.233377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.233385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 00:32:47.246 [2024-06-11 12:27:00.233693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.234003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.234011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 00:32:47.246 [2024-06-11 12:27:00.234363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.234675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.234684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 00:32:47.246 [2024-06-11 12:27:00.234990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.235288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.235296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 00:32:47.246 [2024-06-11 12:27:00.235615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.235916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.235924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 00:32:47.246 [2024-06-11 12:27:00.236249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.236459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.236465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 00:32:47.246 [2024-06-11 12:27:00.236782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.237090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.237100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 
00:32:47.246 [2024-06-11 12:27:00.237415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.237586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.237594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 00:32:47.246 [2024-06-11 12:27:00.237888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.238107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.238114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 00:32:47.246 [2024-06-11 12:27:00.238431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.238748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.238756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 00:32:47.246 [2024-06-11 12:27:00.239033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.239326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.239334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 00:32:47.246 [2024-06-11 12:27:00.239455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.239722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.239730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 00:32:47.246 [2024-06-11 12:27:00.240047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.240340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.240347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 00:32:47.246 [2024-06-11 12:27:00.240536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.240873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.240881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 
00:32:47.246 [2024-06-11 12:27:00.241188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.241394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.246 [2024-06-11 12:27:00.241401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.246 qpair failed and we were unable to recover it. 00:32:47.246 [2024-06-11 12:27:00.241710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.242012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.242025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 00:32:47.247 [2024-06-11 12:27:00.242311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.242611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.242621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 00:32:47.247 [2024-06-11 12:27:00.242855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.243048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.243057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 00:32:47.247 [2024-06-11 12:27:00.243380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.243711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.243719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 00:32:47.247 [2024-06-11 12:27:00.243864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.244028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.244036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 00:32:47.247 [2024-06-11 12:27:00.244222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.244518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.244527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 
00:32:47.247 [2024-06-11 12:27:00.244828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.245080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.245087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 00:32:47.247 [2024-06-11 12:27:00.245387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.245694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.245702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 00:32:47.247 [2024-06-11 12:27:00.246012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.246113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.246122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 00:32:47.247 [2024-06-11 12:27:00.246420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.246717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.246724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 00:32:47.247 [2024-06-11 12:27:00.247007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.247218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.247226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 00:32:47.247 [2024-06-11 12:27:00.247510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.247846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.247856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 00:32:47.247 [2024-06-11 12:27:00.248169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.248473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.248482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 
00:32:47.247 [2024-06-11 12:27:00.248805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.248973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.248981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 00:32:47.247 [2024-06-11 12:27:00.249267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.249581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.249589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 00:32:47.247 [2024-06-11 12:27:00.249889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.250281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.250289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 00:32:47.247 [2024-06-11 12:27:00.250599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.250912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.250920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 00:32:47.247 [2024-06-11 12:27:00.251255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.251459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.251466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 00:32:47.247 [2024-06-11 12:27:00.251768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.252206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.252219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 00:32:47.247 [2024-06-11 12:27:00.252517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.252833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.252842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 
00:32:47.247 [2024-06-11 12:27:00.253070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.253256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.253264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 00:32:47.247 [2024-06-11 12:27:00.253541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.253843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.253854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 00:32:47.247 [2024-06-11 12:27:00.254126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.254453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.254462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 00:32:47.247 [2024-06-11 12:27:00.254796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.255090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.255098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 00:32:47.247 [2024-06-11 12:27:00.255326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.255640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.255648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 00:32:47.247 [2024-06-11 12:27:00.256054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.256339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.256346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 00:32:47.247 [2024-06-11 12:27:00.256662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.256993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.257001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.247 qpair failed and we were unable to recover it. 
00:32:47.247 [2024-06-11 12:27:00.257400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.257739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.247 [2024-06-11 12:27:00.257747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.248 qpair failed and we were unable to recover it. 00:32:47.248 [2024-06-11 12:27:00.257917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.258209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.258219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.518 qpair failed and we were unable to recover it. 00:32:47.518 [2024-06-11 12:27:00.258425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.258689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.258697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.518 qpair failed and we were unable to recover it. 00:32:47.518 [2024-06-11 12:27:00.258988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.259297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.259305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.518 qpair failed and we were unable to recover it. 00:32:47.518 [2024-06-11 12:27:00.259629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.259931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.259939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.518 qpair failed and we were unable to recover it. 00:32:47.518 [2024-06-11 12:27:00.260226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.260517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.260525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.518 qpair failed and we were unable to recover it. 00:32:47.518 [2024-06-11 12:27:00.260740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.261000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.261010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.518 qpair failed and we were unable to recover it. 
00:32:47.518 [2024-06-11 12:27:00.261315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.261509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.261517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.518 qpair failed and we were unable to recover it. 00:32:47.518 [2024-06-11 12:27:00.261813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.262183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.262192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.518 qpair failed and we were unable to recover it. 00:32:47.518 [2024-06-11 12:27:00.262510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.262825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.262834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.518 qpair failed and we were unable to recover it. 00:32:47.518 [2024-06-11 12:27:00.263063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.263395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.263404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.518 qpair failed and we were unable to recover it. 00:32:47.518 [2024-06-11 12:27:00.263694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.263980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.263989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.518 qpair failed and we were unable to recover it. 00:32:47.518 [2024-06-11 12:27:00.264180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.264454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.264462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.518 qpair failed and we were unable to recover it. 00:32:47.518 [2024-06-11 12:27:00.264773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.265091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.265100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.518 qpair failed and we were unable to recover it. 
00:32:47.518 [2024-06-11 12:27:00.265406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.265566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.265574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.518 qpair failed and we were unable to recover it. 00:32:47.518 [2024-06-11 12:27:00.265872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.266078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.266087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.518 qpair failed and we were unable to recover it. 00:32:47.518 [2024-06-11 12:27:00.266443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.266780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.266788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.518 qpair failed and we were unable to recover it. 00:32:47.518 [2024-06-11 12:27:00.267050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.267367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.267376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.518 qpair failed and we were unable to recover it. 00:32:47.518 [2024-06-11 12:27:00.267675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.267959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.267968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.518 qpair failed and we were unable to recover it. 00:32:47.518 [2024-06-11 12:27:00.268163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.268466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.268473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.518 qpair failed and we were unable to recover it. 00:32:47.518 [2024-06-11 12:27:00.268796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.268958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.268967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.518 qpair failed and we were unable to recover it. 
00:32:47.518 [2024-06-11 12:27:00.269283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.269600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.269608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.518 qpair failed and we were unable to recover it. 00:32:47.518 [2024-06-11 12:27:00.269744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.269955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.269963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.518 qpair failed and we were unable to recover it. 00:32:47.518 [2024-06-11 12:27:00.270199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.270535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.518 [2024-06-11 12:27:00.270543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.518 qpair failed and we were unable to recover it. 00:32:47.518 [2024-06-11 12:27:00.270840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.271001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.271010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 00:32:47.519 [2024-06-11 12:27:00.271331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.271645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.271654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 00:32:47.519 [2024-06-11 12:27:00.271824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.272096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.272105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 00:32:47.519 [2024-06-11 12:27:00.272399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.272537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.272546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 
00:32:47.519 [2024-06-11 12:27:00.272806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.273020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.273029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 00:32:47.519 [2024-06-11 12:27:00.273321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.273496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.273504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 00:32:47.519 [2024-06-11 12:27:00.273697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.273882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.273890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 00:32:47.519 [2024-06-11 12:27:00.274237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.274509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.274517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 00:32:47.519 [2024-06-11 12:27:00.274809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.275077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.275086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 00:32:47.519 [2024-06-11 12:27:00.275400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.275687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.275695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 00:32:47.519 [2024-06-11 12:27:00.276000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.276284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.276293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 
00:32:47.519 [2024-06-11 12:27:00.276628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.276923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.276931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 00:32:47.519 [2024-06-11 12:27:00.277127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.277424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.277433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 00:32:47.519 [2024-06-11 12:27:00.277730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.278002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.278011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 00:32:47.519 [2024-06-11 12:27:00.278298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.278595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.278603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 00:32:47.519 [2024-06-11 12:27:00.278948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.279133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.279142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 00:32:47.519 [2024-06-11 12:27:00.279430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.279521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.279529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 00:32:47.519 [2024-06-11 12:27:00.279837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.280185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.280193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 
00:32:47.519 [2024-06-11 12:27:00.280562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.280858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.280866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 00:32:47.519 [2024-06-11 12:27:00.281165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.281487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.281495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 00:32:47.519 [2024-06-11 12:27:00.281816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.282162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.282170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 00:32:47.519 [2024-06-11 12:27:00.282500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.282771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.282778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 00:32:47.519 [2024-06-11 12:27:00.282985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.283096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.283104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 00:32:47.519 [2024-06-11 12:27:00.283302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.283497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.283508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 00:32:47.519 [2024-06-11 12:27:00.283676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.284020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.284029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 
00:32:47.519 [2024-06-11 12:27:00.284311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.284625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.284633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 00:32:47.519 [2024-06-11 12:27:00.284941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.285106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.285114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 00:32:47.519 [2024-06-11 12:27:00.285305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.285594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.519 [2024-06-11 12:27:00.285602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.519 qpair failed and we were unable to recover it. 00:32:47.519 [2024-06-11 12:27:00.285929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.286124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.286133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 00:32:47.520 [2024-06-11 12:27:00.286438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.286611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.286618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 00:32:47.520 [2024-06-11 12:27:00.286890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.287082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.287091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 00:32:47.520 [2024-06-11 12:27:00.287260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.287517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.287525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 
00:32:47.520 [2024-06-11 12:27:00.287823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.288036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.288044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 00:32:47.520 [2024-06-11 12:27:00.288379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.288675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.288684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 00:32:47.520 [2024-06-11 12:27:00.288884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.289213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.289221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 00:32:47.520 [2024-06-11 12:27:00.289551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.289867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.289875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 00:32:47.520 [2024-06-11 12:27:00.290186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.290517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.290525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 00:32:47.520 [2024-06-11 12:27:00.290836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.290917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.290926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 00:32:47.520 [2024-06-11 12:27:00.291234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.291565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.291573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 
00:32:47.520 [2024-06-11 12:27:00.291768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.292089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.292098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 00:32:47.520 [2024-06-11 12:27:00.292376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.292674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.292682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 00:32:47.520 [2024-06-11 12:27:00.292989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.293270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.293279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 00:32:47.520 [2024-06-11 12:27:00.293592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.293915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.293923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 00:32:47.520 [2024-06-11 12:27:00.294128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.294428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.294436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 00:32:47.520 [2024-06-11 12:27:00.294692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.294990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.294997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 00:32:47.520 [2024-06-11 12:27:00.295325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.295658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.295666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 
00:32:47.520 [2024-06-11 12:27:00.296024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.296334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.296342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 00:32:47.520 [2024-06-11 12:27:00.296665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.297005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.297012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 00:32:47.520 [2024-06-11 12:27:00.297132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.297429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.297437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 00:32:47.520 [2024-06-11 12:27:00.297602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.297941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.297949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 00:32:47.520 [2024-06-11 12:27:00.298133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.298322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.298331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 00:32:47.520 [2024-06-11 12:27:00.298629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.298967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.298974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 00:32:47.520 [2024-06-11 12:27:00.299240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.299571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.299579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 
00:32:47.520 [2024-06-11 12:27:00.299892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.300074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.300082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 00:32:47.520 [2024-06-11 12:27:00.300288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.300510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.300518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 00:32:47.520 [2024-06-11 12:27:00.300673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.300979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.300987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.520 qpair failed and we were unable to recover it. 00:32:47.520 [2024-06-11 12:27:00.301168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.520 [2024-06-11 12:27:00.301493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.301501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 00:32:47.521 [2024-06-11 12:27:00.301819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.302175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.302183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 00:32:47.521 [2024-06-11 12:27:00.302496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.302653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.302662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 00:32:47.521 [2024-06-11 12:27:00.302751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.303064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.303073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 
00:32:47.521 [2024-06-11 12:27:00.303406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.303688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.303696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 00:32:47.521 [2024-06-11 12:27:00.303992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.304356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.304364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 00:32:47.521 [2024-06-11 12:27:00.304674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.304986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.304994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 00:32:47.521 [2024-06-11 12:27:00.305232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.305528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.305536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 00:32:47.521 [2024-06-11 12:27:00.305835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.306161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.306170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 00:32:47.521 [2024-06-11 12:27:00.306488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.306708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.306716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 00:32:47.521 [2024-06-11 12:27:00.307024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.307269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.307276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 
00:32:47.521 [2024-06-11 12:27:00.307322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.307644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.307652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 00:32:47.521 [2024-06-11 12:27:00.307981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.308192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.308200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 00:32:47.521 [2024-06-11 12:27:00.308458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.308624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.308633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 00:32:47.521 [2024-06-11 12:27:00.308945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.309257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.309265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 00:32:47.521 [2024-06-11 12:27:00.309563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.309873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.309881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 00:32:47.521 [2024-06-11 12:27:00.310067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.310398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.310406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 00:32:47.521 [2024-06-11 12:27:00.310625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.310932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.310939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 
00:32:47.521 [2024-06-11 12:27:00.311257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.311547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.311555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 00:32:47.521 [2024-06-11 12:27:00.311892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.312115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.312123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 00:32:47.521 [2024-06-11 12:27:00.312401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.312742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.312749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 00:32:47.521 [2024-06-11 12:27:00.313074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.313457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.313465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 00:32:47.521 [2024-06-11 12:27:00.313777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.313964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.313972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 00:32:47.521 [2024-06-11 12:27:00.314300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.314492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.314500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 00:32:47.521 [2024-06-11 12:27:00.314798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.315048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.315055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 
00:32:47.521 [2024-06-11 12:27:00.315393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.315636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.315644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 00:32:47.521 [2024-06-11 12:27:00.315939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.316168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.316176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 00:32:47.521 [2024-06-11 12:27:00.316516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.316803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.316811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.521 qpair failed and we were unable to recover it. 00:32:47.521 [2024-06-11 12:27:00.317143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.317463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.521 [2024-06-11 12:27:00.317471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.522 qpair failed and we were unable to recover it. 00:32:47.522 [2024-06-11 12:27:00.317784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.317904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.317913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.522 qpair failed and we were unable to recover it. 00:32:47.522 [2024-06-11 12:27:00.318007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.318189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.318198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.522 qpair failed and we were unable to recover it. 00:32:47.522 [2024-06-11 12:27:00.318551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.318848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.318856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.522 qpair failed and we were unable to recover it. 
00:32:47.522 [2024-06-11 12:27:00.319163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.319483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.319491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.522 qpair failed and we were unable to recover it. 00:32:47.522 [2024-06-11 12:27:00.319790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.319986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.319994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.522 qpair failed and we were unable to recover it. 00:32:47.522 [2024-06-11 12:27:00.320183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.320454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.320462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.522 qpair failed and we were unable to recover it. 00:32:47.522 [2024-06-11 12:27:00.320778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.321120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.321129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.522 qpair failed and we were unable to recover it. 00:32:47.522 [2024-06-11 12:27:00.321470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.321727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.321734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.522 qpair failed and we were unable to recover it. 00:32:47.522 [2024-06-11 12:27:00.322002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.322195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.322203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.522 qpair failed and we were unable to recover it. 00:32:47.522 [2024-06-11 12:27:00.322508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.322780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.322788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.522 qpair failed and we were unable to recover it. 
00:32:47.522 [2024-06-11 12:27:00.323099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.323476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.323484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.522 qpair failed and we were unable to recover it. 00:32:47.522 [2024-06-11 12:27:00.323768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.323985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.323993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.522 qpair failed and we were unable to recover it. 00:32:47.522 [2024-06-11 12:27:00.324165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.324351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.324359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.522 qpair failed and we were unable to recover it. 00:32:47.522 [2024-06-11 12:27:00.324572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.324840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.324848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.522 qpair failed and we were unable to recover it. 00:32:47.522 [2024-06-11 12:27:00.325052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.325247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.325256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.522 qpair failed and we were unable to recover it. 00:32:47.522 [2024-06-11 12:27:00.325566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.325791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.325799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.522 qpair failed and we were unable to recover it. 00:32:47.522 [2024-06-11 12:27:00.326083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.326389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.326399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.522 qpair failed and we were unable to recover it. 
00:32:47.522 [2024-06-11 12:27:00.326744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.327062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.327070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.522 qpair failed and we were unable to recover it. 00:32:47.522 [2024-06-11 12:27:00.327364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.327685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.327693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.522 qpair failed and we were unable to recover it. 00:32:47.522 [2024-06-11 12:27:00.327849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.328164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.328173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.522 qpair failed and we were unable to recover it. 00:32:47.522 [2024-06-11 12:27:00.328494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.328702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.328710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.522 qpair failed and we were unable to recover it. 00:32:47.522 [2024-06-11 12:27:00.329023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.329340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.329348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.522 qpair failed and we were unable to recover it. 00:32:47.522 [2024-06-11 12:27:00.329651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.329943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.522 [2024-06-11 12:27:00.329952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.522 qpair failed and we were unable to recover it. 00:32:47.523 [2024-06-11 12:27:00.330220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.330549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.330557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 
00:32:47.523 [2024-06-11 12:27:00.330845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.331110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.331118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 00:32:47.523 [2024-06-11 12:27:00.331301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.331629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.331637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 00:32:47.523 [2024-06-11 12:27:00.331964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.332276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.332288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 00:32:47.523 [2024-06-11 12:27:00.332626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.332918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.332926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 00:32:47.523 [2024-06-11 12:27:00.333282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.333462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.333470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 00:32:47.523 [2024-06-11 12:27:00.333782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.333949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.333957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 00:32:47.523 [2024-06-11 12:27:00.334308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.334567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.334575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 
00:32:47.523 [2024-06-11 12:27:00.334908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.334989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.334998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 00:32:47.523 [2024-06-11 12:27:00.335331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.335624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.335632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 00:32:47.523 [2024-06-11 12:27:00.335924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.336089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.336098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 00:32:47.523 [2024-06-11 12:27:00.336168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.336487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.336496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 00:32:47.523 [2024-06-11 12:27:00.336783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.337085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.337093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 00:32:47.523 [2024-06-11 12:27:00.337359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.337530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.337539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 00:32:47.523 [2024-06-11 12:27:00.337621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.337919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.337927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 
00:32:47.523 [2024-06-11 12:27:00.338244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.338567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.338575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 00:32:47.523 [2024-06-11 12:27:00.338749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.339064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.339072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 00:32:47.523 [2024-06-11 12:27:00.339399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.339746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.339754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 00:32:47.523 [2024-06-11 12:27:00.340114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.340464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.340472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 00:32:47.523 [2024-06-11 12:27:00.340767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.341070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.341078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 00:32:47.523 [2024-06-11 12:27:00.341390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.341685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.341693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 00:32:47.523 [2024-06-11 12:27:00.341880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.342230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.342238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 
00:32:47.523 [2024-06-11 12:27:00.342453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.342642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.342650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 00:32:47.523 [2024-06-11 12:27:00.342971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.343211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.343220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 00:32:47.523 [2024-06-11 12:27:00.343532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.343613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.343622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 00:32:47.523 [2024-06-11 12:27:00.343926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.344302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.344311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 00:32:47.523 [2024-06-11 12:27:00.344634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.344938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.344946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 00:32:47.523 [2024-06-11 12:27:00.345122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.345499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.523 [2024-06-11 12:27:00.345507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.523 qpair failed and we were unable to recover it. 00:32:47.523 [2024-06-11 12:27:00.345806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.346060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.346068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 
00:32:47.524 [2024-06-11 12:27:00.346405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.346736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.346744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 00:32:47.524 [2024-06-11 12:27:00.346907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.347111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.347119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 00:32:47.524 [2024-06-11 12:27:00.347388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.347697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.347705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 00:32:47.524 [2024-06-11 12:27:00.348026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.348342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.348350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 00:32:47.524 [2024-06-11 12:27:00.348670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.349010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.349028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 00:32:47.524 [2024-06-11 12:27:00.349342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.349646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.349655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 00:32:47.524 [2024-06-11 12:27:00.349931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.350061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.350070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 
00:32:47.524 [2024-06-11 12:27:00.350375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.350558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.350566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 00:32:47.524 [2024-06-11 12:27:00.350873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.351184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.351192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 00:32:47.524 [2024-06-11 12:27:00.351591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.351877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.351885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 00:32:47.524 [2024-06-11 12:27:00.352221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.352411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.352420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 00:32:47.524 [2024-06-11 12:27:00.352609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.352932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.352940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 00:32:47.524 [2024-06-11 12:27:00.353241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.353569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.353577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 00:32:47.524 [2024-06-11 12:27:00.353825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.354064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.354072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 
00:32:47.524 [2024-06-11 12:27:00.354382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.354723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.354730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 00:32:47.524 [2024-06-11 12:27:00.354769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.355076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.355084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 00:32:47.524 [2024-06-11 12:27:00.355407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.355720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.355728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 00:32:47.524 [2024-06-11 12:27:00.356039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.356437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.356445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 00:32:47.524 [2024-06-11 12:27:00.356667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.356985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.356993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 00:32:47.524 [2024-06-11 12:27:00.357316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.357644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.357652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 00:32:47.524 [2024-06-11 12:27:00.357872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.358067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.358075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 
00:32:47.524 [2024-06-11 12:27:00.358180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.358399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.358407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 00:32:47.524 [2024-06-11 12:27:00.358757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.359075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.359083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 00:32:47.524 [2024-06-11 12:27:00.359413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.359725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.359732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 00:32:47.524 [2024-06-11 12:27:00.360030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.360245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.360254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 00:32:47.524 [2024-06-11 12:27:00.360462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.360764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.360771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 00:32:47.524 [2024-06-11 12:27:00.361102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.361421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.524 [2024-06-11 12:27:00.361430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.524 qpair failed and we were unable to recover it. 00:32:47.524 [2024-06-11 12:27:00.361758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.362099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.362107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 
00:32:47.525 [2024-06-11 12:27:00.362445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.362621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.362629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 00:32:47.525 [2024-06-11 12:27:00.362950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.363261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.363269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 00:32:47.525 [2024-06-11 12:27:00.363644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.363842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.363850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 00:32:47.525 [2024-06-11 12:27:00.364085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.364277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.364285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 00:32:47.525 [2024-06-11 12:27:00.364564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.364831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.364839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 00:32:47.525 [2024-06-11 12:27:00.365172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.365487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.365495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 00:32:47.525 [2024-06-11 12:27:00.365805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.366103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.366111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 
00:32:47.525 [2024-06-11 12:27:00.366456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.366833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.366841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 00:32:47.525 [2024-06-11 12:27:00.367120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.367458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.367466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 00:32:47.525 [2024-06-11 12:27:00.367780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.368064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.368072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 00:32:47.525 [2024-06-11 12:27:00.368412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.368722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.368730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 00:32:47.525 [2024-06-11 12:27:00.369067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.369389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.369397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 00:32:47.525 [2024-06-11 12:27:00.369760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.370063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.370071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 00:32:47.525 [2024-06-11 12:27:00.370306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.370641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.370649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 
00:32:47.525 [2024-06-11 12:27:00.370981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.371411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.371420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 00:32:47.525 [2024-06-11 12:27:00.371750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.372081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.372089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 00:32:47.525 [2024-06-11 12:27:00.372477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.372774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.372782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 00:32:47.525 [2024-06-11 12:27:00.372939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.373287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.373296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 00:32:47.525 [2024-06-11 12:27:00.373609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.373912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.373920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 00:32:47.525 [2024-06-11 12:27:00.374320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.374624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.374632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 00:32:47.525 [2024-06-11 12:27:00.374946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.375207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.375215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 
00:32:47.525 [2024-06-11 12:27:00.375512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.375783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.375790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 00:32:47.525 [2024-06-11 12:27:00.375990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.376257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.376265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 00:32:47.525 [2024-06-11 12:27:00.376572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.376912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.376920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 00:32:47.525 [2024-06-11 12:27:00.377208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.377534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.377542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 00:32:47.525 [2024-06-11 12:27:00.377723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.378047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.378055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 00:32:47.525 [2024-06-11 12:27:00.378193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.378453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.378461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.525 qpair failed and we were unable to recover it. 00:32:47.525 [2024-06-11 12:27:00.378799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.525 [2024-06-11 12:27:00.378983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.378991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 
00:32:47.526 [2024-06-11 12:27:00.379323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.379603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.379611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 00:32:47.526 [2024-06-11 12:27:00.379793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.380091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.380099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 00:32:47.526 [2024-06-11 12:27:00.380441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.380755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.380762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 00:32:47.526 [2024-06-11 12:27:00.381054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.381128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.381136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 00:32:47.526 [2024-06-11 12:27:00.381427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.381687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.381695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 00:32:47.526 [2024-06-11 12:27:00.381874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.382168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.382177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 00:32:47.526 [2024-06-11 12:27:00.382510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.382742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.382750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 
00:32:47.526 [2024-06-11 12:27:00.382920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.383098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.383107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 00:32:47.526 [2024-06-11 12:27:00.383397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.383737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.383746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 00:32:47.526 [2024-06-11 12:27:00.384073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.384406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.384415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 00:32:47.526 [2024-06-11 12:27:00.384576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.384865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.384874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 00:32:47.526 [2024-06-11 12:27:00.385146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.385480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.385488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 00:32:47.526 [2024-06-11 12:27:00.385800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.386036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.386045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 00:32:47.526 [2024-06-11 12:27:00.386330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.386648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.386656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 
00:32:47.526 [2024-06-11 12:27:00.386846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.387129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.387137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 00:32:47.526 [2024-06-11 12:27:00.387441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.387619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.387628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 00:32:47.526 [2024-06-11 12:27:00.387830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.388103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.388111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 00:32:47.526 [2024-06-11 12:27:00.388475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.388801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.388809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 00:32:47.526 [2024-06-11 12:27:00.389102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.389333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.389341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 00:32:47.526 [2024-06-11 12:27:00.389675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.389998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.390006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 00:32:47.526 [2024-06-11 12:27:00.390212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.390428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.390435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 
00:32:47.526 [2024-06-11 12:27:00.390653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.390970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.390978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 00:32:47.526 [2024-06-11 12:27:00.391297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.391464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.391473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 00:32:47.526 [2024-06-11 12:27:00.391757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.391943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.391952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 00:32:47.526 [2024-06-11 12:27:00.392278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.392596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.392603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 00:32:47.526 [2024-06-11 12:27:00.392926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.393248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.393256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 00:32:47.526 [2024-06-11 12:27:00.393556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.393873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.393881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.526 qpair failed and we were unable to recover it. 00:32:47.526 [2024-06-11 12:27:00.394059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.394251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.526 [2024-06-11 12:27:00.394260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.527 qpair failed and we were unable to recover it. 
00:32:47.527 [2024-06-11 12:27:00.394479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.394700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.394708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.527 qpair failed and we were unable to recover it. 00:32:47.527 [2024-06-11 12:27:00.394908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.395114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.395122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.527 qpair failed and we were unable to recover it. 00:32:47.527 [2024-06-11 12:27:00.395300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.395579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.395587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.527 qpair failed and we were unable to recover it. 00:32:47.527 [2024-06-11 12:27:00.395920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.396287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.396295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.527 qpair failed and we were unable to recover it. 00:32:47.527 [2024-06-11 12:27:00.396638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.396841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.396848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.527 qpair failed and we were unable to recover it. 00:32:47.527 [2024-06-11 12:27:00.397156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.397339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.397347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.527 qpair failed and we were unable to recover it. 00:32:47.527 [2024-06-11 12:27:00.397724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.398002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.398010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.527 qpair failed and we were unable to recover it. 
00:32:47.527 [2024-06-11 12:27:00.398316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.398495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.398503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.527 qpair failed and we were unable to recover it. 00:32:47.527 [2024-06-11 12:27:00.398923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.399244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.399253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.527 qpair failed and we were unable to recover it. 00:32:47.527 [2024-06-11 12:27:00.399553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.399846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.399854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.527 qpair failed and we were unable to recover it. 00:32:47.527 [2024-06-11 12:27:00.400123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.400375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.400383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.527 qpair failed and we were unable to recover it. 00:32:47.527 [2024-06-11 12:27:00.400694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.401005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.401013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.527 qpair failed and we were unable to recover it. 00:32:47.527 [2024-06-11 12:27:00.401270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.401601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.401609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.527 qpair failed and we were unable to recover it. 00:32:47.527 [2024-06-11 12:27:00.401929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.402263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.402271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.527 qpair failed and we were unable to recover it. 
00:32:47.527 [2024-06-11 12:27:00.402603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.402805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.402813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.527 qpair failed and we were unable to recover it. 00:32:47.527 [2024-06-11 12:27:00.403124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.403457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.403465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.527 qpair failed and we were unable to recover it. 00:32:47.527 [2024-06-11 12:27:00.403672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.403957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.403964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.527 qpair failed and we were unable to recover it. 00:32:47.527 [2024-06-11 12:27:00.404270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.404595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.404602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.527 qpair failed and we were unable to recover it. 00:32:47.527 [2024-06-11 12:27:00.404890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.405193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.405201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.527 qpair failed and we were unable to recover it. 00:32:47.527 [2024-06-11 12:27:00.405511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.405677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.405685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.527 qpair failed and we were unable to recover it. 00:32:47.527 [2024-06-11 12:27:00.406005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.406288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.406296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.527 qpair failed and we were unable to recover it. 
00:32:47.527 [2024-06-11 12:27:00.406606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.406868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.406876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.527 qpair failed and we were unable to recover it. 00:32:47.527 [2024-06-11 12:27:00.406954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.407127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.407135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.527 qpair failed and we were unable to recover it. 00:32:47.527 [2024-06-11 12:27:00.407304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.407593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.527 [2024-06-11 12:27:00.407602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.527 qpair failed and we were unable to recover it. 00:32:47.527 [2024-06-11 12:27:00.407909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.408239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.408247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 00:32:47.528 [2024-06-11 12:27:00.408557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.408638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.408646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 00:32:47.528 [2024-06-11 12:27:00.408942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.409240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.409248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 00:32:47.528 [2024-06-11 12:27:00.409541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.409610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.409618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 
00:32:47.528 [2024-06-11 12:27:00.409810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.409876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.409884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 00:32:47.528 [2024-06-11 12:27:00.410211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.410459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.410467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 00:32:47.528 [2024-06-11 12:27:00.410681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.410869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.410877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 00:32:47.528 [2024-06-11 12:27:00.411259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.411434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.411444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 00:32:47.528 [2024-06-11 12:27:00.411755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.412054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.412062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 00:32:47.528 [2024-06-11 12:27:00.412286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.412607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.412615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 00:32:47.528 [2024-06-11 12:27:00.412946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.413244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.413252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 
00:32:47.528 [2024-06-11 12:27:00.413562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.413822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.413830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 00:32:47.528 [2024-06-11 12:27:00.414101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.414394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.414402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 00:32:47.528 [2024-06-11 12:27:00.414714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.415009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.415023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 00:32:47.528 [2024-06-11 12:27:00.415353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.415534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.415543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 00:32:47.528 [2024-06-11 12:27:00.415722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.415992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.416000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 00:32:47.528 [2024-06-11 12:27:00.416084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.416362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.416370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 00:32:47.528 [2024-06-11 12:27:00.416677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.417011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.417026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 
00:32:47.528 [2024-06-11 12:27:00.417361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.417651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.417658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 00:32:47.528 [2024-06-11 12:27:00.417945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.418354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.418362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 00:32:47.528 [2024-06-11 12:27:00.418664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.418999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.419007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 00:32:47.528 [2024-06-11 12:27:00.419326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.419504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.419512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 00:32:47.528 [2024-06-11 12:27:00.419887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.420208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.420216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 00:32:47.528 [2024-06-11 12:27:00.420544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.420762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.420770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 00:32:47.528 [2024-06-11 12:27:00.420969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.421160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.421168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 
00:32:47.528 [2024-06-11 12:27:00.421326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.421606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.421614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 00:32:47.528 [2024-06-11 12:27:00.421827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.421996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.422004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 00:32:47.528 [2024-06-11 12:27:00.422205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.422476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.528 [2024-06-11 12:27:00.422486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.528 qpair failed and we were unable to recover it. 00:32:47.528 [2024-06-11 12:27:00.422785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.423073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.423081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 00:32:47.529 [2024-06-11 12:27:00.423402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.423688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.423697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 00:32:47.529 [2024-06-11 12:27:00.423997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.424328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.424336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 00:32:47.529 [2024-06-11 12:27:00.424524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.424845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.424853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 
00:32:47.529 [2024-06-11 12:27:00.425179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.425471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.425478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 00:32:47.529 [2024-06-11 12:27:00.425650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.425840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.425848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 00:32:47.529 [2024-06-11 12:27:00.426162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.426499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.426507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 00:32:47.529 [2024-06-11 12:27:00.426809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.427107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.427115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 00:32:47.529 [2024-06-11 12:27:00.427298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.427597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.427605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 00:32:47.529 [2024-06-11 12:27:00.427937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.428272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.428282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 00:32:47.529 [2024-06-11 12:27:00.428579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.428899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.428907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 
00:32:47.529 [2024-06-11 12:27:00.429203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.429544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.429552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 00:32:47.529 [2024-06-11 12:27:00.429869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.430191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.430200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 00:32:47.529 [2024-06-11 12:27:00.430556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.430887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.430895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 00:32:47.529 [2024-06-11 12:27:00.431118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.431362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.431370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 00:32:47.529 [2024-06-11 12:27:00.431670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.431745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.431752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 00:32:47.529 [2024-06-11 12:27:00.432098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.432383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.432391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 00:32:47.529 [2024-06-11 12:27:00.432712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.432937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.432945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 
00:32:47.529 [2024-06-11 12:27:00.433257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.433553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.433561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 00:32:47.529 [2024-06-11 12:27:00.433849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.434163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.434172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 00:32:47.529 [2024-06-11 12:27:00.434497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.434766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.434774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 00:32:47.529 [2024-06-11 12:27:00.435054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.435318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.435327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 00:32:47.529 [2024-06-11 12:27:00.435615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.435816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.435824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 00:32:47.529 [2024-06-11 12:27:00.436129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.436405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.436413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 00:32:47.529 [2024-06-11 12:27:00.436599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.436887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.436895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 
00:32:47.529 [2024-06-11 12:27:00.437214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.437522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.437530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 00:32:47.529 [2024-06-11 12:27:00.437725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.438055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.438063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 00:32:47.529 [2024-06-11 12:27:00.438406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.438676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.529 [2024-06-11 12:27:00.438684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.529 qpair failed and we were unable to recover it. 00:32:47.529 [2024-06-11 12:27:00.438998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.439239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.439248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 00:32:47.530 [2024-06-11 12:27:00.439557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.439882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.439890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 00:32:47.530 [2024-06-11 12:27:00.440100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.440446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.440454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 00:32:47.530 [2024-06-11 12:27:00.440669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.440977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.440985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 
00:32:47.530 [2024-06-11 12:27:00.441333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.441672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.441680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 00:32:47.530 [2024-06-11 12:27:00.441904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.442079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.442087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 00:32:47.530 [2024-06-11 12:27:00.442269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.442562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.442570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 00:32:47.530 [2024-06-11 12:27:00.442761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.443044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.443053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 00:32:47.530 [2024-06-11 12:27:00.443364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.443650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.443658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 00:32:47.530 [2024-06-11 12:27:00.444059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.444372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.444380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 00:32:47.530 [2024-06-11 12:27:00.444582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.444753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.444761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 
00:32:47.530 [2024-06-11 12:27:00.444942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.445137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.445145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 00:32:47.530 [2024-06-11 12:27:00.445493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.445692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.445700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 00:32:47.530 [2024-06-11 12:27:00.445871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.446194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.446202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 00:32:47.530 [2024-06-11 12:27:00.446546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.446737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.446745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 00:32:47.530 [2024-06-11 12:27:00.446955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.447071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.447080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 00:32:47.530 [2024-06-11 12:27:00.447243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.447551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.447559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 00:32:47.530 [2024-06-11 12:27:00.447878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.448200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.448208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 
00:32:47.530 [2024-06-11 12:27:00.448531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.448841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.448848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 00:32:47.530 [2024-06-11 12:27:00.449143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.449368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.449376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 00:32:47.530 [2024-06-11 12:27:00.449689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.450007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.450015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 00:32:47.530 [2024-06-11 12:27:00.450327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.450645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.450653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 00:32:47.530 [2024-06-11 12:27:00.450948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.451266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.451274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 00:32:47.530 [2024-06-11 12:27:00.451612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.451907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.451914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 00:32:47.530 [2024-06-11 12:27:00.452239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.452443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.452451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 
00:32:47.530 [2024-06-11 12:27:00.452608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.452735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.452743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 00:32:47.530 [2024-06-11 12:27:00.452919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.453190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.453198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 00:32:47.530 [2024-06-11 12:27:00.453516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.453770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.453778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.530 qpair failed and we were unable to recover it. 00:32:47.530 [2024-06-11 12:27:00.453956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.454159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.530 [2024-06-11 12:27:00.454167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 00:32:47.531 [2024-06-11 12:27:00.454419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.454577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.454585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 00:32:47.531 [2024-06-11 12:27:00.454877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.455061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.455069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 00:32:47.531 [2024-06-11 12:27:00.455220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.455510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.455518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 
00:32:47.531 [2024-06-11 12:27:00.455831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.456101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.456109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 00:32:47.531 [2024-06-11 12:27:00.456424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.456717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.456725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 00:32:47.531 [2024-06-11 12:27:00.456914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.457103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.457111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 00:32:47.531 [2024-06-11 12:27:00.457392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.457722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.457730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 00:32:47.531 [2024-06-11 12:27:00.458042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.458135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.458143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 00:32:47.531 [2024-06-11 12:27:00.458362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.458575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.458583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 00:32:47.531 [2024-06-11 12:27:00.458889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.459068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.459076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 
00:32:47.531 [2024-06-11 12:27:00.459348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.459536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.459544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 00:32:47.531 [2024-06-11 12:27:00.459695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.460059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.460067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 00:32:47.531 [2024-06-11 12:27:00.460293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.460597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.460605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 00:32:47.531 [2024-06-11 12:27:00.460943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.461277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.461286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 00:32:47.531 [2024-06-11 12:27:00.461600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.461783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.461791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 00:32:47.531 [2024-06-11 12:27:00.462080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.462267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.462275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 00:32:47.531 [2024-06-11 12:27:00.462435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.462578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.462586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 
00:32:47.531 [2024-06-11 12:27:00.462904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.463304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.463312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 00:32:47.531 [2024-06-11 12:27:00.463637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.463902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.463910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 00:32:47.531 [2024-06-11 12:27:00.464123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.464510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.464517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 00:32:47.531 [2024-06-11 12:27:00.464713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.464902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.464909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 00:32:47.531 [2024-06-11 12:27:00.465218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.465512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.465521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 00:32:47.531 [2024-06-11 12:27:00.465901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.466073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.466082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 00:32:47.531 [2024-06-11 12:27:00.466373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.466677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.466685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 
00:32:47.531 [2024-06-11 12:27:00.466986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.467328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.467336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 00:32:47.531 [2024-06-11 12:27:00.467579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.467872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.467880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 00:32:47.531 [2024-06-11 12:27:00.467946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.468266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.468273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.531 qpair failed and we were unable to recover it. 00:32:47.531 [2024-06-11 12:27:00.468488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.531 [2024-06-11 12:27:00.468685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.468693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.532 qpair failed and we were unable to recover it. 00:32:47.532 [2024-06-11 12:27:00.468866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.469032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.469040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.532 qpair failed and we were unable to recover it. 00:32:47.532 [2024-06-11 12:27:00.469335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.469652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.469660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.532 qpair failed and we were unable to recover it. 00:32:47.532 [2024-06-11 12:27:00.469727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.470034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.470043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.532 qpair failed and we were unable to recover it. 
00:32:47.532 [2024-06-11 12:27:00.470248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.470563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.470572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.532 qpair failed and we were unable to recover it. 00:32:47.532 [2024-06-11 12:27:00.470780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.470942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.470949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.532 qpair failed and we were unable to recover it. 00:32:47.532 [2024-06-11 12:27:00.471283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.471598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.471606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.532 qpair failed and we were unable to recover it. 00:32:47.532 [2024-06-11 12:27:00.471794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.472093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.472101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.532 qpair failed and we were unable to recover it. 00:32:47.532 [2024-06-11 12:27:00.472429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.472678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.472686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.532 qpair failed and we were unable to recover it. 00:32:47.532 [2024-06-11 12:27:00.472856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.473144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.473152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.532 qpair failed and we were unable to recover it. 00:32:47.532 [2024-06-11 12:27:00.473423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.473748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.473755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.532 qpair failed and we were unable to recover it. 
00:32:47.532 [2024-06-11 12:27:00.474098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.474352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.474360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.532 qpair failed and we were unable to recover it. 00:32:47.532 [2024-06-11 12:27:00.474513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.474712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.474720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.532 qpair failed and we were unable to recover it. 00:32:47.532 [2024-06-11 12:27:00.474959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.475026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.475035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.532 qpair failed and we were unable to recover it. 00:32:47.532 [2024-06-11 12:27:00.475368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.475547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.475555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.532 qpair failed and we were unable to recover it. 00:32:47.532 [2024-06-11 12:27:00.475881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.476190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.476199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.532 qpair failed and we were unable to recover it. 00:32:47.532 [2024-06-11 12:27:00.476509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.476819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.476827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.532 qpair failed and we were unable to recover it. 00:32:47.532 [2024-06-11 12:27:00.477022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.477319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.477326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.532 qpair failed and we were unable to recover it. 
00:32:47.532 [2024-06-11 12:27:00.477614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.477934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.477943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.532 qpair failed and we were unable to recover it. 00:32:47.532 [2024-06-11 12:27:00.478258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.478463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.478472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.532 qpair failed and we were unable to recover it. 00:32:47.532 [2024-06-11 12:27:00.478780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.478988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.478996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.532 qpair failed and we were unable to recover it. 00:32:47.532 [2024-06-11 12:27:00.479233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.479570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.479577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.532 qpair failed and we were unable to recover it. 00:32:47.532 [2024-06-11 12:27:00.479878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.480190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.480199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.532 qpair failed and we were unable to recover it. 00:32:47.532 [2024-06-11 12:27:00.480384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.480683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.532 [2024-06-11 12:27:00.480690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 00:32:47.533 [2024-06-11 12:27:00.481009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.481324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.481332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 
00:32:47.533 [2024-06-11 12:27:00.481633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.481842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.481850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 00:32:47.533 [2024-06-11 12:27:00.482195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.482537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.482545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 00:32:47.533 [2024-06-11 12:27:00.482852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.483023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.483032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 00:32:47.533 [2024-06-11 12:27:00.483342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.483640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.483648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 00:32:47.533 [2024-06-11 12:27:00.483960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.484147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.484155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 00:32:47.533 [2024-06-11 12:27:00.484409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.484743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.484751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 00:32:47.533 [2024-06-11 12:27:00.485046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.485334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.485341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 
00:32:47.533 [2024-06-11 12:27:00.485666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.485964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.485972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 00:32:47.533 [2024-06-11 12:27:00.486321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.486605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.486613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 00:32:47.533 [2024-06-11 12:27:00.486935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.487052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.487060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 00:32:47.533 [2024-06-11 12:27:00.487363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.487648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.487656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 00:32:47.533 [2024-06-11 12:27:00.487909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.488207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.488216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 00:32:47.533 [2024-06-11 12:27:00.488539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.488729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.488736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 00:32:47.533 [2024-06-11 12:27:00.488907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.489143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.489151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 
00:32:47.533 [2024-06-11 12:27:00.489374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.489656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.489664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 00:32:47.533 [2024-06-11 12:27:00.489967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.490259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.490267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 00:32:47.533 [2024-06-11 12:27:00.490579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.490876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.490884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 00:32:47.533 [2024-06-11 12:27:00.491198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.491543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.491551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 00:32:47.533 [2024-06-11 12:27:00.491737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.492079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.492087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 00:32:47.533 [2024-06-11 12:27:00.492348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.492686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.492694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 00:32:47.533 [2024-06-11 12:27:00.493000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.493214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.493223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 
00:32:47.533 [2024-06-11 12:27:00.493526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.493842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.493849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 00:32:47.533 [2024-06-11 12:27:00.494067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.494347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.494355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 00:32:47.533 [2024-06-11 12:27:00.494565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.494833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.494841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 00:32:47.533 [2024-06-11 12:27:00.495066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.495343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.495351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 00:32:47.533 [2024-06-11 12:27:00.495558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.495757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.495765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 00:32:47.533 [2024-06-11 12:27:00.495829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.496016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.533 [2024-06-11 12:27:00.496028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.533 qpair failed and we were unable to recover it. 00:32:47.534 [2024-06-11 12:27:00.496074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.496382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.496390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 
00:32:47.534 [2024-06-11 12:27:00.496637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.496866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.496874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 00:32:47.534 [2024-06-11 12:27:00.497198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.497516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.497524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 00:32:47.534 [2024-06-11 12:27:00.497836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.498006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.498015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 00:32:47.534 [2024-06-11 12:27:00.498326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.498647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.498658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 00:32:47.534 [2024-06-11 12:27:00.498971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.499223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.499232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 00:32:47.534 [2024-06-11 12:27:00.499564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.499934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.499942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 00:32:47.534 [2024-06-11 12:27:00.500133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.500415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.500423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 
00:32:47.534 [2024-06-11 12:27:00.500741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.501061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.501069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 00:32:47.534 [2024-06-11 12:27:00.501434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.501621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.501629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 00:32:47.534 [2024-06-11 12:27:00.501955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.502181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.502189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 00:32:47.534 [2024-06-11 12:27:00.502524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.502843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.502852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 00:32:47.534 [2024-06-11 12:27:00.503118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.503358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.503367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 00:32:47.534 [2024-06-11 12:27:00.503690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.504023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.504032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 00:32:47.534 [2024-06-11 12:27:00.504230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.504499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.504509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 
00:32:47.534 [2024-06-11 12:27:00.504854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.505188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.505197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 00:32:47.534 [2024-06-11 12:27:00.505528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.505708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.505716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 00:32:47.534 [2024-06-11 12:27:00.505932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.506230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.506238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 00:32:47.534 [2024-06-11 12:27:00.506574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.506889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.506897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 00:32:47.534 [2024-06-11 12:27:00.507138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.507454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.507461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 00:32:47.534 [2024-06-11 12:27:00.507773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.507997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.508005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 00:32:47.534 [2024-06-11 12:27:00.508279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.508593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.508601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 
00:32:47.534 [2024-06-11 12:27:00.508890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.509145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.509154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 00:32:47.534 [2024-06-11 12:27:00.509435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.509774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.509782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 00:32:47.534 [2024-06-11 12:27:00.510178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.510510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.510519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 00:32:47.534 [2024-06-11 12:27:00.510828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.511122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.511131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 00:32:47.534 [2024-06-11 12:27:00.511429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.511588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.511596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 00:32:47.534 [2024-06-11 12:27:00.511788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.511973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.534 [2024-06-11 12:27:00.511981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.534 qpair failed and we were unable to recover it. 00:32:47.535 [2024-06-11 12:27:00.512302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.512611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.512619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 
00:32:47.535 [2024-06-11 12:27:00.512896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.513216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.513225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 00:32:47.535 [2024-06-11 12:27:00.513498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.513825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.513833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 00:32:47.535 [2024-06-11 12:27:00.514046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.514330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.514338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 00:32:47.535 [2024-06-11 12:27:00.514633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.514944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.514952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 00:32:47.535 [2024-06-11 12:27:00.515320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.515650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.515658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 00:32:47.535 [2024-06-11 12:27:00.515852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.516104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.516114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 00:32:47.535 [2024-06-11 12:27:00.516439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.516707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.516715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 
00:32:47.535 [2024-06-11 12:27:00.517091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.517400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.517407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 00:32:47.535 [2024-06-11 12:27:00.517715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.517911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.517919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 00:32:47.535 [2024-06-11 12:27:00.518179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.518360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.518368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 00:32:47.535 [2024-06-11 12:27:00.518547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.518878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.518886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 00:32:47.535 [2024-06-11 12:27:00.519198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.519492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.519500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 00:32:47.535 [2024-06-11 12:27:00.519802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.520047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.520055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 00:32:47.535 [2024-06-11 12:27:00.520440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.520772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.520781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 
00:32:47.535 [2024-06-11 12:27:00.521093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.521404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.521413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 00:32:47.535 [2024-06-11 12:27:00.521615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.521793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.521801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 00:32:47.535 [2024-06-11 12:27:00.522139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.522482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.522490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 00:32:47.535 [2024-06-11 12:27:00.522819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.523119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.523127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 00:32:47.535 [2024-06-11 12:27:00.523464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.523781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.523789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 00:32:47.535 [2024-06-11 12:27:00.524086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.524402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.524410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 00:32:47.535 [2024-06-11 12:27:00.524716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.524962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.524971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 
00:32:47.535 [2024-06-11 12:27:00.525288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.525460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.525469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 00:32:47.535 [2024-06-11 12:27:00.525777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.525981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.525989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 00:32:47.535 [2024-06-11 12:27:00.526272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.526340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.526348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 00:32:47.535 [2024-06-11 12:27:00.526659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.527005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.527013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 00:32:47.535 [2024-06-11 12:27:00.527349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.527657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.527665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 00:32:47.535 [2024-06-11 12:27:00.527994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.528346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.535 [2024-06-11 12:27:00.528354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.535 qpair failed and we were unable to recover it. 00:32:47.535 [2024-06-11 12:27:00.528670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.528982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.528991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.536 qpair failed and we were unable to recover it. 
00:32:47.536 [2024-06-11 12:27:00.529378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.529671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.529678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.536 qpair failed and we were unable to recover it. 00:32:47.536 [2024-06-11 12:27:00.529973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.530300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.530308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.536 qpair failed and we were unable to recover it. 00:32:47.536 [2024-06-11 12:27:00.530596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.530915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.530923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.536 qpair failed and we were unable to recover it. 00:32:47.536 [2024-06-11 12:27:00.531237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.531552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.531560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.536 qpair failed and we were unable to recover it. 00:32:47.536 [2024-06-11 12:27:00.531865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.532141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.532149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.536 qpair failed and we were unable to recover it. 00:32:47.536 [2024-06-11 12:27:00.532482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.532788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.532796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.536 qpair failed and we were unable to recover it. 00:32:47.536 [2024-06-11 12:27:00.533126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.533461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.533469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.536 qpair failed and we were unable to recover it. 
00:32:47.536 [2024-06-11 12:27:00.533774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.534088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.534096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.536 qpair failed and we were unable to recover it. 00:32:47.536 [2024-06-11 12:27:00.534457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.534774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.534782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.536 qpair failed and we were unable to recover it. 00:32:47.536 [2024-06-11 12:27:00.535072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.535407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.535415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.536 qpair failed and we were unable to recover it. 00:32:47.536 [2024-06-11 12:27:00.535707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.536024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.536032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.536 qpair failed and we were unable to recover it. 00:32:47.536 [2024-06-11 12:27:00.536317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.536630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.536638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.536 qpair failed and we were unable to recover it. 00:32:47.536 [2024-06-11 12:27:00.536953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.537234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.537243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.536 qpair failed and we were unable to recover it. 00:32:47.536 [2024-06-11 12:27:00.537557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.537869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.537877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.536 qpair failed and we were unable to recover it. 
00:32:47.536 [2024-06-11 12:27:00.538048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.538323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.538331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.536 qpair failed and we were unable to recover it. 00:32:47.536 [2024-06-11 12:27:00.538657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.538972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.538980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.536 qpair failed and we were unable to recover it. 00:32:47.536 [2024-06-11 12:27:00.539289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.539603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.539611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.536 qpair failed and we were unable to recover it. 00:32:47.536 [2024-06-11 12:27:00.539966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.540249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.540257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.536 qpair failed and we were unable to recover it. 00:32:47.536 [2024-06-11 12:27:00.540584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.540905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.540913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.536 qpair failed and we were unable to recover it. 00:32:47.536 [2024-06-11 12:27:00.541280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.541484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.541493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.536 qpair failed and we were unable to recover it. 00:32:47.536 [2024-06-11 12:27:00.541811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.542075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.542083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.536 qpair failed and we were unable to recover it. 
00:32:47.536 [2024-06-11 12:27:00.542422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.542752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.536 [2024-06-11 12:27:00.542760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.536 qpair failed and we were unable to recover it. 00:32:47.808 [2024-06-11 12:27:00.543089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.543317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.543326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.808 qpair failed and we were unable to recover it. 00:32:47.808 [2024-06-11 12:27:00.543654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.544004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.544012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.808 qpair failed and we were unable to recover it. 00:32:47.808 [2024-06-11 12:27:00.544237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.544520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.544528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.808 qpair failed and we were unable to recover it. 00:32:47.808 [2024-06-11 12:27:00.544834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.545123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.545131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.808 qpair failed and we were unable to recover it. 00:32:47.808 [2024-06-11 12:27:00.545464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.545782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.545790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.808 qpair failed and we were unable to recover it. 00:32:47.808 [2024-06-11 12:27:00.546085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.546424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.546433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.808 qpair failed and we were unable to recover it. 
00:32:47.808 [2024-06-11 12:27:00.546737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.547047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.547056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.808 qpair failed and we were unable to recover it. 00:32:47.808 [2024-06-11 12:27:00.547390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.547722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.547730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.808 qpair failed and we were unable to recover it. 00:32:47.808 [2024-06-11 12:27:00.547929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.548190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.548198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.808 qpair failed and we were unable to recover it. 00:32:47.808 [2024-06-11 12:27:00.548532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.548833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.548841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.808 qpair failed and we were unable to recover it. 00:32:47.808 [2024-06-11 12:27:00.549009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.549359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.549368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.808 qpair failed and we were unable to recover it. 00:32:47.808 [2024-06-11 12:27:00.549570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.549763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.549771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.808 qpair failed and we were unable to recover it. 00:32:47.808 [2024-06-11 12:27:00.550056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.550338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.550346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.808 qpair failed and we were unable to recover it. 
00:32:47.808 [2024-06-11 12:27:00.550619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.550962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.550970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.808 qpair failed and we were unable to recover it. 00:32:47.808 [2024-06-11 12:27:00.551261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.551569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.551576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.808 qpair failed and we were unable to recover it. 00:32:47.808 [2024-06-11 12:27:00.551903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.552188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.552196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.808 qpair failed and we were unable to recover it. 00:32:47.808 [2024-06-11 12:27:00.552568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.552865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.552873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.808 qpair failed and we were unable to recover it. 00:32:47.808 [2024-06-11 12:27:00.553190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.553525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.553533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.808 qpair failed and we were unable to recover it. 00:32:47.808 [2024-06-11 12:27:00.553732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.554000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.554007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.808 qpair failed and we were unable to recover it. 00:32:47.808 [2024-06-11 12:27:00.554303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.808 [2024-06-11 12:27:00.554613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.554621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 
00:32:47.809 [2024-06-11 12:27:00.554952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.555264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.555273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-06-11 12:27:00.555580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.555868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.555876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-06-11 12:27:00.556190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.556489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.556496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-06-11 12:27:00.556869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.556942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.556950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-06-11 12:27:00.557056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.557355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.557362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-06-11 12:27:00.557500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.557784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.557792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-06-11 12:27:00.558055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.558406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.558414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 
00:32:47.809 [2024-06-11 12:27:00.558765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.559066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.559074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-06-11 12:27:00.559365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.559686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.559695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-06-11 12:27:00.559871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.560198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.560206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-06-11 12:27:00.560518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.560838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.560846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-06-11 12:27:00.561167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.561485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.561494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-06-11 12:27:00.561802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.562125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.562133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-06-11 12:27:00.562419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.562735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.562743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 
00:32:47.809 [2024-06-11 12:27:00.563060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.563338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.563346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-06-11 12:27:00.563637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.563975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.563983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-06-11 12:27:00.564354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.564697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.564705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-06-11 12:27:00.565035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.565318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.565326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-06-11 12:27:00.565637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.565819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.565827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-06-11 12:27:00.566161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.566510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.566517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-06-11 12:27:00.566742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.566945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.566953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 
00:32:47.809 [2024-06-11 12:27:00.567282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.567577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.567585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-06-11 12:27:00.567897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.568131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.568140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-06-11 12:27:00.568481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.568775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.568784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-06-11 12:27:00.568980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.569140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.569149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-06-11 12:27:00.569320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.569609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.569617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-06-11 12:27:00.569925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.570139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.570147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.809 qpair failed and we were unable to recover it. 00:32:47.809 [2024-06-11 12:27:00.570318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.809 [2024-06-11 12:27:00.570662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.570670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 
00:32:47.810 [2024-06-11 12:27:00.570990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.571296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.571304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-06-11 12:27:00.571554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.571857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.571865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-06-11 12:27:00.572070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.572410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.572417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-06-11 12:27:00.572722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.573062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.573070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-06-11 12:27:00.573416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.573582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.573590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-06-11 12:27:00.573867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.574190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.574197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-06-11 12:27:00.574518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.574812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.574820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 
00:32:47.810 [2024-06-11 12:27:00.575134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.575466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.575474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-06-11 12:27:00.575825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.576099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.576107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-06-11 12:27:00.576429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.576628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.576637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-06-11 12:27:00.576946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.577264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.577272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-06-11 12:27:00.577602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.577922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.577930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-06-11 12:27:00.578145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.578406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.578413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-06-11 12:27:00.578698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.579010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.579021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 
00:32:47.810 [2024-06-11 12:27:00.579284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.579593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.579601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-06-11 12:27:00.579914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.580231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.580239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-06-11 12:27:00.580556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.580855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.580863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-06-11 12:27:00.581067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.581386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.581393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-06-11 12:27:00.581704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.582036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.582044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-06-11 12:27:00.582291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.582602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.582610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-06-11 12:27:00.582762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.582852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.582860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 
00:32:47.810 [2024-06-11 12:27:00.583085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.583362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.583370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-06-11 12:27:00.583683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.583851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.583859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-06-11 12:27:00.583897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.584207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.584216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-06-11 12:27:00.584494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.584788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.584796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-06-11 12:27:00.585095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.585414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.585422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-06-11 12:27:00.585742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.586046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.586054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 00:32:47.810 [2024-06-11 12:27:00.586277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.586594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.810 [2024-06-11 12:27:00.586602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.810 qpair failed and we were unable to recover it. 
00:32:47.811 [2024-06-11 12:27:00.586933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.587129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.587140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-06-11 12:27:00.587451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.587647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.587655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-06-11 12:27:00.587856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.588133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.588141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-06-11 12:27:00.588512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.588800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.588808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-06-11 12:27:00.588963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.589267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.589276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-06-11 12:27:00.589540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.589796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.589804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-06-11 12:27:00.590107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.590367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.590375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 
00:32:47.811 [2024-06-11 12:27:00.590658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.590999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.591006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-06-11 12:27:00.591201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.591552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.591560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-06-11 12:27:00.591862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.592160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.592169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-06-11 12:27:00.592505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.592778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.592788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-06-11 12:27:00.593108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.593401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.593408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-06-11 12:27:00.593740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.594038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.594046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-06-11 12:27:00.594315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.594585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.594594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 
00:32:47.811 [2024-06-11 12:27:00.594894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.595192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.595200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-06-11 12:27:00.595530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.595838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.595846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-06-11 12:27:00.596154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.596445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.596453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-06-11 12:27:00.596670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.596964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.596971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-06-11 12:27:00.597281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.597599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.597607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-06-11 12:27:00.597932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.598188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.598196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-06-11 12:27:00.598523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.598841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.598850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 
00:32:47.811 [2024-06-11 12:27:00.599160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.599468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.599476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-06-11 12:27:00.599685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.599967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.599975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-06-11 12:27:00.600311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.600620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.600629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-06-11 12:27:00.600937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.601267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.601275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-06-11 12:27:00.601509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.601798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.601806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-06-11 12:27:00.602009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.602303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.602310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 00:32:47.811 [2024-06-11 12:27:00.602485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.602638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.602646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.811 qpair failed and we were unable to recover it. 
00:32:47.811 [2024-06-11 12:27:00.602970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.811 [2024-06-11 12:27:00.603293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.603301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-06-11 12:27:00.603603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.603824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.603832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-06-11 12:27:00.604133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.604502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.604512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-06-11 12:27:00.604727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.604914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.604922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-06-11 12:27:00.605314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.605633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.605641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-06-11 12:27:00.605952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.606271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.606279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-06-11 12:27:00.606588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.606905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.606913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 
00:32:47.812 [2024-06-11 12:27:00.607126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.607439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.607447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-06-11 12:27:00.607772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.607983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.607991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-06-11 12:27:00.608264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.608460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.608470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-06-11 12:27:00.608642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.608812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.608820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-06-11 12:27:00.609122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.609431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.609439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-06-11 12:27:00.609619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.609925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.609933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-06-11 12:27:00.610221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.610488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.610496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 
00:32:47.812 [2024-06-11 12:27:00.610803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.611133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.611141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-06-11 12:27:00.611324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.611548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.611556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-06-11 12:27:00.611733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.612050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.612059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-06-11 12:27:00.612281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.612605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.612613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-06-11 12:27:00.612689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.612984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.612992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-06-11 12:27:00.613097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.613410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.613418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-06-11 12:27:00.613700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.614014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.614026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 
00:32:47.812 [2024-06-11 12:27:00.614317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.614642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.614649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-06-11 12:27:00.614948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.615220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.615228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.812 qpair failed and we were unable to recover it. 00:32:47.812 [2024-06-11 12:27:00.615549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.812 [2024-06-11 12:27:00.615766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.813 [2024-06-11 12:27:00.615774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.813 qpair failed and we were unable to recover it. 00:32:47.813 [2024-06-11 12:27:00.616123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.813 [2024-06-11 12:27:00.616427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.813 [2024-06-11 12:27:00.616435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.813 qpair failed and we were unable to recover it. 00:32:47.813 [2024-06-11 12:27:00.616745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.813 [2024-06-11 12:27:00.617065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.813 [2024-06-11 12:27:00.617074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.813 qpair failed and we were unable to recover it. 00:32:47.813 [2024-06-11 12:27:00.617406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.813 [2024-06-11 12:27:00.617731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.813 [2024-06-11 12:27:00.617738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.813 qpair failed and we were unable to recover it. 00:32:47.813 [2024-06-11 12:27:00.618070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.813 [2024-06-11 12:27:00.618278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.813 [2024-06-11 12:27:00.618286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.813 qpair failed and we were unable to recover it. 
00:32:47.818 [2024-06-11 12:27:00.702175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.702256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.702264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.818 qpair failed and we were unable to recover it. 00:32:47.818 [2024-06-11 12:27:00.702473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.702772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.702780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.818 qpair failed and we were unable to recover it. 00:32:47.818 [2024-06-11 12:27:00.703200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.703497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.703505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.818 qpair failed and we were unable to recover it. 00:32:47.818 [2024-06-11 12:27:00.703816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.703979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.703987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.818 qpair failed and we were unable to recover it. 00:32:47.818 [2024-06-11 12:27:00.704196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.704525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.704533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.818 qpair failed and we were unable to recover it. 00:32:47.818 [2024-06-11 12:27:00.704863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.705189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.705197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.818 qpair failed and we were unable to recover it. 00:32:47.818 [2024-06-11 12:27:00.705527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.705715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.705723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.818 qpair failed and we were unable to recover it. 
00:32:47.818 [2024-06-11 12:27:00.706029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.706254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.706263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.818 qpair failed and we were unable to recover it. 00:32:47.818 [2024-06-11 12:27:00.706455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.706743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.706751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.818 qpair failed and we were unable to recover it. 00:32:47.818 [2024-06-11 12:27:00.707066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.707349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.707357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.818 qpair failed and we were unable to recover it. 00:32:47.818 [2024-06-11 12:27:00.707657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.707975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.707983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.818 qpair failed and we were unable to recover it. 00:32:47.818 [2024-06-11 12:27:00.708194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.708508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.708516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.818 qpair failed and we were unable to recover it. 00:32:47.818 [2024-06-11 12:27:00.708806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.708887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.708896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.818 qpair failed and we were unable to recover it. 00:32:47.818 [2024-06-11 12:27:00.709211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.709348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.709355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.818 qpair failed and we were unable to recover it. 
00:32:47.818 [2024-06-11 12:27:00.709658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.709835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.818 [2024-06-11 12:27:00.709844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.818 qpair failed and we were unable to recover it. 00:32:47.819 [2024-06-11 12:27:00.710134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.710426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.710434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-06-11 12:27:00.710760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.711076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.711084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-06-11 12:27:00.711398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.711723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.711731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-06-11 12:27:00.712044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.712322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.712330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-06-11 12:27:00.712494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.712791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.712799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-06-11 12:27:00.713134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.713318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.713326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 
00:32:47.819 [2024-06-11 12:27:00.713616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.713939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.713947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-06-11 12:27:00.714069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.714248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.714257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-06-11 12:27:00.714578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.714865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.714873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-06-11 12:27:00.715172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.715468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.715476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-06-11 12:27:00.715693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.716013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.716026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-06-11 12:27:00.716229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.716559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.716567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-06-11 12:27:00.716719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.716995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.717003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 
00:32:47.819 [2024-06-11 12:27:00.717399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.717606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.717614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-06-11 12:27:00.717931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.718240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.718248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-06-11 12:27:00.718551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.718866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.718873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-06-11 12:27:00.719186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.719518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.719526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-06-11 12:27:00.719737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.720064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.720072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-06-11 12:27:00.720357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.720660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.720668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-06-11 12:27:00.720984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.721303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.721311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 
00:32:47.819 [2024-06-11 12:27:00.721634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.721824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.721832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-06-11 12:27:00.722141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.722468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.722476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-06-11 12:27:00.722817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.723119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.723127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-06-11 12:27:00.723467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.723799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.723807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-06-11 12:27:00.724078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.724320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.724328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-06-11 12:27:00.724510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.724590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.724599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-06-11 12:27:00.724811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.725090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.725098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 
00:32:47.819 [2024-06-11 12:27:00.725417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.725700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.725708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.819 qpair failed and we were unable to recover it. 00:32:47.819 [2024-06-11 12:27:00.725996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.819 [2024-06-11 12:27:00.726038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.726046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-06-11 12:27:00.726351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.726650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.726658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-06-11 12:27:00.726954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.727242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.727250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-06-11 12:27:00.727546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.727727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.727735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-06-11 12:27:00.728022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.728299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.728307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-06-11 12:27:00.728591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.728887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.728895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 
00:32:47.820 [2024-06-11 12:27:00.729203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.729438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.729446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-06-11 12:27:00.729742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.730060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.730068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-06-11 12:27:00.730421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.730486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.730495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-06-11 12:27:00.730665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.730985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.730993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-06-11 12:27:00.731225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.731556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.731564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-06-11 12:27:00.731679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.731846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.731854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-06-11 12:27:00.732177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.732491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.732499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 
00:32:47.820 [2024-06-11 12:27:00.732838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.733121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.733130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-06-11 12:27:00.733344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.733503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.733512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-06-11 12:27:00.733788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.734004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.734012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-06-11 12:27:00.734388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.734611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.734619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-06-11 12:27:00.734925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.735211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.735219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-06-11 12:27:00.735525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.735714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.735722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-06-11 12:27:00.736023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.736401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.736409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 
00:32:47.820 [2024-06-11 12:27:00.736599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.736902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.736910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-06-11 12:27:00.737224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.737526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.737533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-06-11 12:27:00.737723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.738036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.738044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-06-11 12:27:00.738339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.738654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.738662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-06-11 12:27:00.738794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.739096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.739105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-06-11 12:27:00.739449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.739668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.739676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-06-11 12:27:00.739987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.740312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.740320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 
00:32:47.820 [2024-06-11 12:27:00.740618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.740896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.740904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-06-11 12:27:00.741085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.741398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.820 [2024-06-11 12:27:00.741406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.820 qpair failed and we were unable to recover it. 00:32:47.820 [2024-06-11 12:27:00.741716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.742047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.742055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.821 qpair failed and we were unable to recover it. 00:32:47.821 [2024-06-11 12:27:00.742243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.742553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.742561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.821 qpair failed and we were unable to recover it. 00:32:47.821 [2024-06-11 12:27:00.742877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.743077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.743085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.821 qpair failed and we were unable to recover it. 00:32:47.821 [2024-06-11 12:27:00.743278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.743610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.743618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.821 qpair failed and we were unable to recover it. 00:32:47.821 [2024-06-11 12:27:00.743822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.743906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.743914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.821 qpair failed and we were unable to recover it. 
00:32:47.821 [2024-06-11 12:27:00.744136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.744420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.744428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.821 qpair failed and we were unable to recover it. 00:32:47.821 [2024-06-11 12:27:00.744738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.744944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.744952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.821 qpair failed and we were unable to recover it. 00:32:47.821 [2024-06-11 12:27:00.745250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.745564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.745572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.821 qpair failed and we were unable to recover it. 00:32:47.821 [2024-06-11 12:27:00.745914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.746097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.746105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.821 qpair failed and we were unable to recover it. 00:32:47.821 [2024-06-11 12:27:00.746385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.746706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.746714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.821 qpair failed and we were unable to recover it. 00:32:47.821 [2024-06-11 12:27:00.746920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.747235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.747243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.821 qpair failed and we were unable to recover it. 00:32:47.821 [2024-06-11 12:27:00.747569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.747892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.747900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.821 qpair failed and we were unable to recover it. 
00:32:47.821 [2024-06-11 12:27:00.748072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.748260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.748268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.821 qpair failed and we were unable to recover it. 00:32:47.821 [2024-06-11 12:27:00.748583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.748942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.748951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.821 qpair failed and we were unable to recover it. 00:32:47.821 [2024-06-11 12:27:00.749136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.749364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.749373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.821 qpair failed and we were unable to recover it. 00:32:47.821 [2024-06-11 12:27:00.749699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.749895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.749902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.821 qpair failed and we were unable to recover it. 00:32:47.821 [2024-06-11 12:27:00.750203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.750537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.750545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.821 qpair failed and we were unable to recover it. 00:32:47.821 [2024-06-11 12:27:00.750907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.751089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.751098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.821 qpair failed and we were unable to recover it. 00:32:47.821 [2024-06-11 12:27:00.751415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.751731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.751739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.821 qpair failed and we were unable to recover it. 
00:32:47.821 [2024-06-11 12:27:00.751975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.752261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.752269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.821 qpair failed and we were unable to recover it. 00:32:47.821 [2024-06-11 12:27:00.752451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.752716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.752724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.821 qpair failed and we were unable to recover it. 00:32:47.821 [2024-06-11 12:27:00.752798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.753108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.753117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.821 qpair failed and we were unable to recover it. 00:32:47.821 [2024-06-11 12:27:00.753338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.753662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.753670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.821 qpair failed and we were unable to recover it. 00:32:47.821 [2024-06-11 12:27:00.753988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.754185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.754195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.821 qpair failed and we were unable to recover it. 00:32:47.821 [2024-06-11 12:27:00.754516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.754699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.754707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.821 qpair failed and we were unable to recover it. 00:32:47.821 [2024-06-11 12:27:00.755026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.755231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.821 [2024-06-11 12:27:00.755239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:47.821 qpair failed and we were unable to recover it. 
00:32:47.821 [2024-06-11 12:27:00.755568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.821 [2024-06-11 12:27:00.755857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.821 [2024-06-11 12:27:00.755865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420
00:32:47.821 qpair failed and we were unable to recover it.
00:32:47.821 [2024-06-11 12:27:00.756047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.821 [2024-06-11 12:27:00.756165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.821 [2024-06-11 12:27:00.756173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420
00:32:47.821 qpair failed and we were unable to recover it.
[... the same three-message failure cycle repeats for every remaining connection attempt from 12:27:00.756 through 12:27:00.839 (log time 00:32:47.821-00:32:48.099): connect() fails with errno = 111 (connection refused), nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420, and each qpair fails and cannot be recovered ...]
00:32:48.099 [2024-06-11 12:27:00.839953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.840167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.840176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.099 qpair failed and we were unable to recover it. 00:32:48.099 [2024-06-11 12:27:00.840512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.840693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.840700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.099 qpair failed and we were unable to recover it. 00:32:48.099 [2024-06-11 12:27:00.840983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.841312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.841320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.099 qpair failed and we were unable to recover it. 00:32:48.099 [2024-06-11 12:27:00.841600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.841902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.841910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.099 qpair failed and we were unable to recover it. 00:32:48.099 [2024-06-11 12:27:00.842299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.842472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.842480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.099 qpair failed and we were unable to recover it. 00:32:48.099 [2024-06-11 12:27:00.842774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.842919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.842926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.099 qpair failed and we were unable to recover it. 00:32:48.099 [2024-06-11 12:27:00.843304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.843579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.843586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.099 qpair failed and we were unable to recover it. 
00:32:48.099 [2024-06-11 12:27:00.843780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.843960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.843968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.099 qpair failed and we were unable to recover it. 00:32:48.099 [2024-06-11 12:27:00.844228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.844496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.844504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.099 qpair failed and we were unable to recover it. 00:32:48.099 [2024-06-11 12:27:00.844795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.845102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.845111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.099 qpair failed and we were unable to recover it. 00:32:48.099 [2024-06-11 12:27:00.845290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.845585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.845593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.099 qpair failed and we were unable to recover it. 00:32:48.099 [2024-06-11 12:27:00.845894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.846188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.846196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.099 qpair failed and we were unable to recover it. 00:32:48.099 [2024-06-11 12:27:00.846508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.846825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.846833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.099 qpair failed and we were unable to recover it. 00:32:48.099 [2024-06-11 12:27:00.847044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.847208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.847217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.099 qpair failed and we were unable to recover it. 
00:32:48.099 [2024-06-11 12:27:00.847611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.847898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.099 [2024-06-11 12:27:00.847906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.099 qpair failed and we were unable to recover it. 00:32:48.100 [2024-06-11 12:27:00.848142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.848451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.848458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 00:32:48.100 [2024-06-11 12:27:00.848781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.849097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.849105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 00:32:48.100 [2024-06-11 12:27:00.849296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.849612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.849620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 00:32:48.100 [2024-06-11 12:27:00.849929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.850247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.850257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 00:32:48.100 [2024-06-11 12:27:00.850487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.850757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.850765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 00:32:48.100 [2024-06-11 12:27:00.851073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.851356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.851364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 
00:32:48.100 [2024-06-11 12:27:00.851664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.851968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.851976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 00:32:48.100 [2024-06-11 12:27:00.852143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.852329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.852338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 00:32:48.100 [2024-06-11 12:27:00.852608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.852918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.852925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 00:32:48.100 [2024-06-11 12:27:00.853220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.853556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.853564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 00:32:48.100 [2024-06-11 12:27:00.853844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.854006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.854015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 00:32:48.100 [2024-06-11 12:27:00.854224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.854552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.854559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 00:32:48.100 [2024-06-11 12:27:00.854747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.854910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.854917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 
00:32:48.100 [2024-06-11 12:27:00.855110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.855419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.855427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 00:32:48.100 [2024-06-11 12:27:00.855748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.856038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.856047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 00:32:48.100 [2024-06-11 12:27:00.856358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.856656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.856664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 00:32:48.100 [2024-06-11 12:27:00.856960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.857264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.857272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 00:32:48.100 [2024-06-11 12:27:00.857589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.857831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.857839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 00:32:48.100 [2024-06-11 12:27:00.858150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.858427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.858434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 00:32:48.100 [2024-06-11 12:27:00.858715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.859009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.859020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 
00:32:48.100 [2024-06-11 12:27:00.859162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.859436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.859445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 00:32:48.100 [2024-06-11 12:27:00.859752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.860082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.860090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 00:32:48.100 [2024-06-11 12:27:00.860459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.860738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.860745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 00:32:48.100 [2024-06-11 12:27:00.861064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.861394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.861404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 00:32:48.100 [2024-06-11 12:27:00.861578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.861796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.861804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 00:32:48.100 [2024-06-11 12:27:00.862090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.862408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.862416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 00:32:48.100 [2024-06-11 12:27:00.862715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.862975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.862983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.100 qpair failed and we were unable to recover it. 
00:32:48.100 [2024-06-11 12:27:00.863258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.100 [2024-06-11 12:27:00.863428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.863436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 00:32:48.101 [2024-06-11 12:27:00.863748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.863988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.863996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 00:32:48.101 [2024-06-11 12:27:00.864192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.864360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.864368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 00:32:48.101 [2024-06-11 12:27:00.864653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.864966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.864973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 00:32:48.101 [2024-06-11 12:27:00.865203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.865501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.865509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 00:32:48.101 [2024-06-11 12:27:00.865824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.866104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.866113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 00:32:48.101 [2024-06-11 12:27:00.866427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.866595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.866605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 
00:32:48.101 [2024-06-11 12:27:00.866933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.867340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.867348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 00:32:48.101 [2024-06-11 12:27:00.867565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.867733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.867741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 00:32:48.101 [2024-06-11 12:27:00.867949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.868266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.868274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 00:32:48.101 [2024-06-11 12:27:00.868582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.868854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.868861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 00:32:48.101 [2024-06-11 12:27:00.869162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.869438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.869445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 00:32:48.101 [2024-06-11 12:27:00.869758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.869948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.869956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 00:32:48.101 [2024-06-11 12:27:00.870031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.870310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.870318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 
00:32:48.101 [2024-06-11 12:27:00.870631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.870948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.870956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 00:32:48.101 [2024-06-11 12:27:00.871246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.871491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.871498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 00:32:48.101 [2024-06-11 12:27:00.871812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.871876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.871884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 00:32:48.101 [2024-06-11 12:27:00.872193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.872458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.872465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 00:32:48.101 [2024-06-11 12:27:00.872800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.872883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.872891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 00:32:48.101 [2024-06-11 12:27:00.873176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.873510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.873518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 00:32:48.101 [2024-06-11 12:27:00.873828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.874151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.874159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 
00:32:48.101 [2024-06-11 12:27:00.874479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.874795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.874803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 00:32:48.101 [2024-06-11 12:27:00.875089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.875276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.875284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 00:32:48.101 [2024-06-11 12:27:00.875577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.875936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.875944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 00:32:48.101 [2024-06-11 12:27:00.876140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.876405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.876412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 00:32:48.101 [2024-06-11 12:27:00.876711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.877015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.877033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 00:32:48.101 [2024-06-11 12:27:00.877315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.877629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.877636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 00:32:48.101 [2024-06-11 12:27:00.877820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.877990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.877999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 
00:32:48.101 [2024-06-11 12:27:00.878295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.878604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.101 [2024-06-11 12:27:00.878611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.101 qpair failed and we were unable to recover it. 00:32:48.101 [2024-06-11 12:27:00.878929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.879087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.879096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 00:32:48.102 [2024-06-11 12:27:00.879381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.879694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.879701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 00:32:48.102 [2024-06-11 12:27:00.879879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.880183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.880191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 00:32:48.102 [2024-06-11 12:27:00.880513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.880832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.880840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 00:32:48.102 [2024-06-11 12:27:00.881158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.881499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.881507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 00:32:48.102 [2024-06-11 12:27:00.881811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.882134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.882143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 
00:32:48.102 [2024-06-11 12:27:00.882472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.882624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.882633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 00:32:48.102 [2024-06-11 12:27:00.882938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.883189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.883197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 00:32:48.102 [2024-06-11 12:27:00.883475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.883791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.883799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 00:32:48.102 [2024-06-11 12:27:00.884004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.884278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.884286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 00:32:48.102 [2024-06-11 12:27:00.884571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.884881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.884889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 00:32:48.102 [2024-06-11 12:27:00.885052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.885326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.885334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 00:32:48.102 [2024-06-11 12:27:00.885690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.886028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.886036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 
00:32:48.102 [2024-06-11 12:27:00.886344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.886660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.886668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 00:32:48.102 [2024-06-11 12:27:00.886955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.887267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.887276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 00:32:48.102 [2024-06-11 12:27:00.887432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.887710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.887718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 00:32:48.102 [2024-06-11 12:27:00.888025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.888375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.888382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 00:32:48.102 [2024-06-11 12:27:00.888556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.888923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.888931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 00:32:48.102 [2024-06-11 12:27:00.889234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.889528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.889535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 00:32:48.102 [2024-06-11 12:27:00.889845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.890124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.890131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 
00:32:48.102 [2024-06-11 12:27:00.890383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.890719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.890726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 00:32:48.102 [2024-06-11 12:27:00.890968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.891131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.891140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 00:32:48.102 [2024-06-11 12:27:00.891463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.891754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.891762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 00:32:48.102 [2024-06-11 12:27:00.892070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.892373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.892380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 00:32:48.102 [2024-06-11 12:27:00.892685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.892847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.892855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 00:32:48.102 [2024-06-11 12:27:00.893171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.893503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.893511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 00:32:48.102 [2024-06-11 12:27:00.893821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.894056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.894064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 
00:32:48.102 [2024-06-11 12:27:00.894365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.894530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.894539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.102 qpair failed and we were unable to recover it. 00:32:48.102 [2024-06-11 12:27:00.894835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.894987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.102 [2024-06-11 12:27:00.894995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.103 qpair failed and we were unable to recover it. 00:32:48.103 [2024-06-11 12:27:00.895183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.895343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.895351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.103 qpair failed and we were unable to recover it. 00:32:48.103 [2024-06-11 12:27:00.895632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.895923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.895932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.103 qpair failed and we were unable to recover it. 00:32:48.103 [2024-06-11 12:27:00.896230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.896568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.896576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.103 qpair failed and we were unable to recover it. 00:32:48.103 [2024-06-11 12:27:00.896889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.897188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.897197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.103 qpair failed and we were unable to recover it. 00:32:48.103 [2024-06-11 12:27:00.897510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.897821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.897829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.103 qpair failed and we were unable to recover it. 
00:32:48.103 [2024-06-11 12:27:00.898038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.898329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.898337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.103 qpair failed and we were unable to recover it. 00:32:48.103 [2024-06-11 12:27:00.898647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.898832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.898840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.103 qpair failed and we were unable to recover it. 00:32:48.103 [2024-06-11 12:27:00.899155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.899439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.899446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.103 qpair failed and we were unable to recover it. 00:32:48.103 [2024-06-11 12:27:00.899750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.900064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.900073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.103 qpair failed and we were unable to recover it. 00:32:48.103 [2024-06-11 12:27:00.900393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.900725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.900733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.103 qpair failed and we were unable to recover it. 00:32:48.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 1704836 Killed "${NVMF_APP[@]}" "$@" 00:32:48.103 [2024-06-11 12:27:00.901101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.901418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.901425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.103 qpair failed and we were unable to recover it. 00:32:48.103 [2024-06-11 12:27:00.901583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 12:27:00 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:32:48.103 [2024-06-11 12:27:00.901887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.901895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.103 qpair failed and we were unable to recover it. 
00:32:48.103 12:27:00 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:48.103 [2024-06-11 12:27:00.902118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 12:27:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:48.103 [2024-06-11 12:27:00.902437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.902446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.103 qpair failed and we were unable to recover it. 00:32:48.103 [2024-06-11 12:27:00.902607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 12:27:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:48.103 [2024-06-11 12:27:00.902791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.902800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.103 qpair failed and we were unable to recover it. 00:32:48.103 12:27:00 -- common/autotest_common.sh@10 -- # set +x 00:32:48.103 [2024-06-11 12:27:00.903063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.903360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.903368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.103 qpair failed and we were unable to recover it. 00:32:48.103 [2024-06-11 12:27:00.903681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.903994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.904002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.103 qpair failed and we were unable to recover it. 00:32:48.103 [2024-06-11 12:27:00.904173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.904476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.904484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.103 qpair failed and we were unable to recover it. 00:32:48.103 [2024-06-11 12:27:00.904784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.905120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.905129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.103 qpair failed and we were unable to recover it. 00:32:48.103 [2024-06-11 12:27:00.905448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.905739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.905746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.103 qpair failed and we were unable to recover it. 
00:32:48.103 [2024-06-11 12:27:00.906059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.906374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.906382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.103 qpair failed and we were unable to recover it. 00:32:48.103 [2024-06-11 12:27:00.906674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.906834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.906842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.103 qpair failed and we were unable to recover it. 00:32:48.103 [2024-06-11 12:27:00.907107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.907416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.103 [2024-06-11 12:27:00.907425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 00:32:48.104 [2024-06-11 12:27:00.907636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.907951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.907959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 00:32:48.104 [2024-06-11 12:27:00.908243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.908559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.908566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 00:32:48.104 [2024-06-11 12:27:00.908872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.909156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.909164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 00:32:48.104 [2024-06-11 12:27:00.909485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 12:27:00 -- nvmf/common.sh@469 -- # nvmfpid=1705925 00:32:48.104 [2024-06-11 12:27:00.909802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.909810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 
00:32:48.104 12:27:00 -- nvmf/common.sh@470 -- # waitforlisten 1705925 00:32:48.104 [2024-06-11 12:27:00.910096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 12:27:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:48.104 12:27:00 -- common/autotest_common.sh@819 -- # '[' -z 1705925 ']' 00:32:48.104 [2024-06-11 12:27:00.910429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.910437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 00:32:48.104 12:27:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:48.104 12:27:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:48.104 [2024-06-11 12:27:00.910745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 12:27:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:48.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:48.104 [2024-06-11 12:27:00.911042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.911051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 00:32:48.104 12:27:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:48.104 12:27:00 -- common/autotest_common.sh@10 -- # set +x 00:32:48.104 [2024-06-11 12:27:00.911374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.911675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.911683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 00:32:48.104 [2024-06-11 12:27:00.911986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.912274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.912282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 00:32:48.104 [2024-06-11 12:27:00.912638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.912964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.912972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 
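Alongside the connection retries, the shell trace above restarts the target side: the previous nvmf application (pid 1704836) was killed at target_disconnect.sh line 44, nvmfappstart relaunches build/bin/nvmf_tgt inside the cvl_0_0_ns_spdk network namespace as pid 1705925, and waitforlisten blocks until the new process is listening on the RPC socket /var/tmp/spdk.sock. A minimal C sketch of one way to poll for such a UNIX-domain listener (illustrative only; this is not the actual waitforlisten helper, which the test scripts implement in shell):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Poll until something accepts connections on a UNIX-domain socket
 * (illustrative sketch, not the real autotest helper). */
static int wait_for_unix_listener(const char *path, int retries)
{
    struct sockaddr_un sa;
    memset(&sa, 0, sizeof(sa));
    sa.sun_family = AF_UNIX;
    strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);

    for (int i = 0; i < retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd >= 0 && connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
            close(fd);
            return 0;          /* listener is up */
        }
        if (fd >= 0)
            close(fd);
        sleep(1);              /* not ready yet; retry */
    }
    return -1;
}

int main(void)
{
    return wait_for_unix_listener("/var/tmp/spdk.sock", 100) == 0 ? 0 : 1;
}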
00:32:48.104 [2024-06-11 12:27:00.913246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.913584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.913592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 00:32:48.104 [2024-06-11 12:27:00.913801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.914124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.914132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 00:32:48.104 [2024-06-11 12:27:00.914475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.914766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.914773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 00:32:48.104 [2024-06-11 12:27:00.915091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.915430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.915437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 00:32:48.104 [2024-06-11 12:27:00.915540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.915751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.915759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 00:32:48.104 [2024-06-11 12:27:00.916052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.916438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.916446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 00:32:48.104 [2024-06-11 12:27:00.916635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.916956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.916964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 
00:32:48.104 [2024-06-11 12:27:00.917221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.917487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.917494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 00:32:48.104 [2024-06-11 12:27:00.917797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.918099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.918107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 00:32:48.104 [2024-06-11 12:27:00.918436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.918698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.918706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 00:32:48.104 [2024-06-11 12:27:00.919040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.919199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.919207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 00:32:48.104 [2024-06-11 12:27:00.919502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.919849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.919857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 00:32:48.104 [2024-06-11 12:27:00.920187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.920533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.920541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 00:32:48.104 [2024-06-11 12:27:00.920855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.921254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.921262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 
00:32:48.104 [2024-06-11 12:27:00.921567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.921912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.921919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 00:32:48.104 [2024-06-11 12:27:00.921982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.922145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.922153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 00:32:48.104 [2024-06-11 12:27:00.922461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.922673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.104 [2024-06-11 12:27:00.922681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.104 qpair failed and we were unable to recover it. 00:32:48.105 [2024-06-11 12:27:00.922877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.923181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.923189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 00:32:48.105 [2024-06-11 12:27:00.923499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.923793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.923801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 00:32:48.105 [2024-06-11 12:27:00.923997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.924312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.924320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 00:32:48.105 [2024-06-11 12:27:00.924631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.924946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.924954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 
00:32:48.105 [2024-06-11 12:27:00.925268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.925467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.925475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 00:32:48.105 [2024-06-11 12:27:00.925780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.926117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.926125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 00:32:48.105 [2024-06-11 12:27:00.926464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.926769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.926778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 00:32:48.105 [2024-06-11 12:27:00.927086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.927429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.927437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 00:32:48.105 [2024-06-11 12:27:00.927748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.928092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.928100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 00:32:48.105 [2024-06-11 12:27:00.928283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.928468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.928476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 00:32:48.105 [2024-06-11 12:27:00.928795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.928995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.929003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 
00:32:48.105 [2024-06-11 12:27:00.929323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.929504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.929512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 00:32:48.105 [2024-06-11 12:27:00.929706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.929863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.929875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 00:32:48.105 [2024-06-11 12:27:00.930242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.930436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.930444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 00:32:48.105 [2024-06-11 12:27:00.930676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.930978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.930986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 00:32:48.105 [2024-06-11 12:27:00.931262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.931580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.931588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 00:32:48.105 [2024-06-11 12:27:00.931754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.932034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.932042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 00:32:48.105 [2024-06-11 12:27:00.932313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.932647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.932655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 
00:32:48.105 [2024-06-11 12:27:00.932826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.932992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.932999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 00:32:48.105 [2024-06-11 12:27:00.933273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.933607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.933615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 00:32:48.105 [2024-06-11 12:27:00.933929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.934228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.934236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 00:32:48.105 [2024-06-11 12:27:00.934523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.934811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.934818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 00:32:48.105 [2024-06-11 12:27:00.935121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.935465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.935473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 00:32:48.105 [2024-06-11 12:27:00.935779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.936000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.936008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 00:32:48.105 [2024-06-11 12:27:00.936299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.936641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.936649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 
00:32:48.105 [2024-06-11 12:27:00.936857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.937075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.937083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 00:32:48.105 [2024-06-11 12:27:00.937407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.937707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.937715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 00:32:48.105 [2024-06-11 12:27:00.937916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.938218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.938226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.105 qpair failed and we were unable to recover it. 00:32:48.105 [2024-06-11 12:27:00.938552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.105 [2024-06-11 12:27:00.938743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.938753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 00:32:48.106 [2024-06-11 12:27:00.939073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.939387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.939394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 00:32:48.106 [2024-06-11 12:27:00.939754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.940087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.940095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 00:32:48.106 [2024-06-11 12:27:00.940419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.940577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.940585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 
00:32:48.106 [2024-06-11 12:27:00.940898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.941189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.941197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 00:32:48.106 [2024-06-11 12:27:00.941434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.941654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.941662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 00:32:48.106 [2024-06-11 12:27:00.941858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.942034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.942042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 00:32:48.106 [2024-06-11 12:27:00.942359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.942709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.942717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 00:32:48.106 [2024-06-11 12:27:00.943029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.943243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.943251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 00:32:48.106 [2024-06-11 12:27:00.943470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.943682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.943689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 00:32:48.106 [2024-06-11 12:27:00.944008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.944255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.944263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 
00:32:48.106 [2024-06-11 12:27:00.944491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.944666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.944675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 00:32:48.106 [2024-06-11 12:27:00.944994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.945291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.945299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 00:32:48.106 [2024-06-11 12:27:00.945595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.945736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.945744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 00:32:48.106 [2024-06-11 12:27:00.945885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.946203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.946212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 00:32:48.106 [2024-06-11 12:27:00.946552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.946877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.946885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 00:32:48.106 [2024-06-11 12:27:00.947194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.947434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.947442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 00:32:48.106 [2024-06-11 12:27:00.947610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.947857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.947865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 
00:32:48.106 [2024-06-11 12:27:00.948183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.948514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.948522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 00:32:48.106 [2024-06-11 12:27:00.948857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.949074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.949082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 00:32:48.106 [2024-06-11 12:27:00.949404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.949717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.949724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 00:32:48.106 [2024-06-11 12:27:00.950091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.950403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.950411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 00:32:48.106 [2024-06-11 12:27:00.950575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.950899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.950908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 00:32:48.106 [2024-06-11 12:27:00.951225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.951534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.951543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 00:32:48.106 [2024-06-11 12:27:00.951823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.952019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.952028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 
00:32:48.106 [2024-06-11 12:27:00.952366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.952542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.952550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 00:32:48.106 [2024-06-11 12:27:00.952865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.953153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.953160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 00:32:48.106 [2024-06-11 12:27:00.953363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.953690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.953697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 00:32:48.106 [2024-06-11 12:27:00.953863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.954167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.106 [2024-06-11 12:27:00.954175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.106 qpair failed and we were unable to recover it. 00:32:48.106 [2024-06-11 12:27:00.954526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.954689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.954698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 00:32:48.107 [2024-06-11 12:27:00.954874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.955160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.955170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 00:32:48.107 [2024-06-11 12:27:00.955500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.955825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.955833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 
00:32:48.107 [2024-06-11 12:27:00.956152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.956448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.956456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 00:32:48.107 [2024-06-11 12:27:00.956776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.957062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.957071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 00:32:48.107 [2024-06-11 12:27:00.957406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.957626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.957634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 00:32:48.107 [2024-06-11 12:27:00.957945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.958229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.958237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 00:32:48.107 [2024-06-11 12:27:00.958550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.958884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.958892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 00:32:48.107 [2024-06-11 12:27:00.959069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.959436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.959444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 00:32:48.107 [2024-06-11 12:27:00.959704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.959921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.959929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 
00:32:48.107 [2024-06-11 12:27:00.960142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.960435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.960442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 00:32:48.107 [2024-06-11 12:27:00.960642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.960951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.960962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 00:32:48.107 [2024-06-11 12:27:00.961282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.961449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.961457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 00:32:48.107 [2024-06-11 12:27:00.961624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.961932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.961940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 00:32:48.107 [2024-06-11 12:27:00.962124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.962461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.962470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 00:32:48.107 [2024-06-11 12:27:00.962779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.962949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.962958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 00:32:48.107 [2024-06-11 12:27:00.963268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.963606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.963614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 00:32:48.107 [2024-06-11 12:27:00.963964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.964153] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:32:48.107 [2024-06-11 12:27:00.964201] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:48.107 [2024-06-11 12:27:00.964228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.964235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 00:32:48.107 [2024-06-11 12:27:00.964570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.964921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.964929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 00:32:48.107 [2024-06-11 12:27:00.965190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.965423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.965431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 00:32:48.107 [2024-06-11 12:27:00.965729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.966037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.966045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 00:32:48.107 [2024-06-11 12:27:00.966385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.966696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.966704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 00:32:48.107 [2024-06-11 12:27:00.966917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.967204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.967212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 00:32:48.107 [2024-06-11 12:27:00.967529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.967808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.967815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 
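The restarted target comes up with core mask 0xF0 (nvmf_tgt -m 0xF0, mirrored in the DPDK EAL -c 0xF0 parameter above), which selects CPU cores 4 through 7 for the SPDK reactors. A small C sketch that decodes such a hex core mask into core indices (illustrative only; SPDK and DPDK parse the mask internally):

#include <stdio.h>
#include <stdlib.h>

/* Decode a hex core mask such as "0xF0" into the CPU indices it selects
 * (for illustration; the mask value is taken from the log above). */
int main(void)
{
    const char *mask_str = "0xF0";
    unsigned long mask = strtoul(mask_str, NULL, 16);

    printf("coremask %s selects cores:", mask_str);
    for (int core = 0; mask != 0; core++, mask >>= 1) {
        if (mask & 1UL)
            printf(" %d", core);
    }
    printf("\n");   /* prints: coremask 0xF0 selects cores: 4 5 6 7 */
    return 0;
}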
00:32:48.107 [2024-06-11 12:27:00.968136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.968429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.968437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 00:32:48.107 [2024-06-11 12:27:00.968750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.969071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.969079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 00:32:48.107 [2024-06-11 12:27:00.969409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.969733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.969741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 00:32:48.107 [2024-06-11 12:27:00.970055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.970355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.107 [2024-06-11 12:27:00.970363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.107 qpair failed and we were unable to recover it. 00:32:48.108 [2024-06-11 12:27:00.970648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.970977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.970985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.108 qpair failed and we were unable to recover it. 00:32:48.108 [2024-06-11 12:27:00.971349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.971655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.971663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.108 qpair failed and we were unable to recover it. 00:32:48.108 [2024-06-11 12:27:00.971981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.972349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.972357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.108 qpair failed and we were unable to recover it. 
00:32:48.108 [2024-06-11 12:27:00.972677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.973002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.973010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.108 qpair failed and we were unable to recover it. 00:32:48.108 [2024-06-11 12:27:00.973326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.973628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.973636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.108 qpair failed and we were unable to recover it. 00:32:48.108 [2024-06-11 12:27:00.973951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.974279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.974288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.108 qpair failed and we were unable to recover it. 00:32:48.108 [2024-06-11 12:27:00.974611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.974796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.974804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.108 qpair failed and we were unable to recover it. 00:32:48.108 [2024-06-11 12:27:00.975136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.975328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.975335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.108 qpair failed and we were unable to recover it. 00:32:48.108 [2024-06-11 12:27:00.975484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.975691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.975699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.108 qpair failed and we were unable to recover it. 00:32:48.108 [2024-06-11 12:27:00.975875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.976098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.976106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.108 qpair failed and we were unable to recover it. 
00:32:48.108 [2024-06-11 12:27:00.976431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.976610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.976618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.108 qpair failed and we were unable to recover it. 00:32:48.108 [2024-06-11 12:27:00.976928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.977230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.977238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.108 qpair failed and we were unable to recover it. 00:32:48.108 [2024-06-11 12:27:00.977590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.977878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.977886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.108 qpair failed and we were unable to recover it. 00:32:48.108 [2024-06-11 12:27:00.978104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.978393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.978401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.108 qpair failed and we were unable to recover it. 00:32:48.108 [2024-06-11 12:27:00.978745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.979091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.979100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.108 qpair failed and we were unable to recover it. 00:32:48.108 [2024-06-11 12:27:00.979423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.979603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.979611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.108 qpair failed and we were unable to recover it. 00:32:48.108 [2024-06-11 12:27:00.979946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.980132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.980141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.108 qpair failed and we were unable to recover it. 
00:32:48.108 [2024-06-11 12:27:00.980465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.980782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.980790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.108 qpair failed and we were unable to recover it. 00:32:48.108 [2024-06-11 12:27:00.981095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.981388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.981396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.108 qpair failed and we were unable to recover it. 00:32:48.108 [2024-06-11 12:27:00.981693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.981734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.981742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.108 qpair failed and we were unable to recover it. 00:32:48.108 [2024-06-11 12:27:00.982064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.982369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.982377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.108 qpair failed and we were unable to recover it. 00:32:48.108 [2024-06-11 12:27:00.982663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.982981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.982989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.108 qpair failed and we were unable to recover it. 00:32:48.108 [2024-06-11 12:27:00.983304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.983622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.983630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.108 qpair failed and we were unable to recover it. 00:32:48.108 [2024-06-11 12:27:00.983938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.984220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.984228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.108 qpair failed and we were unable to recover it. 
00:32:48.108 [2024-06-11 12:27:00.984537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.108 [2024-06-11 12:27:00.984879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.984887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-06-11 12:27:00.985191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.985522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.985530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-06-11 12:27:00.985853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.986048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.986056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-06-11 12:27:00.986370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.986686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.986694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-06-11 12:27:00.987015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.987302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.987310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-06-11 12:27:00.987665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.987985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.987994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-06-11 12:27:00.988288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.988486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.988494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 
00:32:48.109 [2024-06-11 12:27:00.988822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.989139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.989147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-06-11 12:27:00.989460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.989605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.989613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-06-11 12:27:00.989822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.989991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.989999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-06-11 12:27:00.990310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.990631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.990639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-06-11 12:27:00.990946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.991266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.991274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-06-11 12:27:00.991569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.991892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.991900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-06-11 12:27:00.992227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.992571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.992579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 
00:32:48.109 [2024-06-11 12:27:00.992888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.993186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.993193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-06-11 12:27:00.993524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.993861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.993869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-06-11 12:27:00.994185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.994522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.994529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-06-11 12:27:00.994840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.995137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.995145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-06-11 12:27:00.995450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.995741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.995749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-06-11 12:27:00.996056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.996268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.996276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-06-11 12:27:00.996432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.996714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.996723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 
00:32:48.109 EAL: No free 2048 kB hugepages reported on node 1 00:32:48.109 [2024-06-11 12:27:00.997036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.997317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.997325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-06-11 12:27:00.997639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.997984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.997992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-06-11 12:27:00.998282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.998575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.998583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-06-11 12:27:00.998884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.999197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.999205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-06-11 12:27:00.999385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.999560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:00.999568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-06-11 12:27:00.999771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:01.000118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:01.000126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.109 [2024-06-11 12:27:01.000447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:01.000772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:01.000780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 
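The `EAL: No free 2048 kB hugepages reported on node 1` notice at the start of the block above means DPDK's EAL found no free 2 MB hugepages on NUMA node 1 while initializing, so hugepage-backed memory comes from other nodes. As a small illustrative check (not part of SPDK or DPDK, and assuming the standard Linux sysfs layout), the per-node free-hugepage counter the kernel exposes can be read directly:

```c
/* Illustrative check for the EAL notice above: read how many free 2048 kB
 * hugepages a given NUMA node reports via sysfs. The node number and page
 * size come from the log line; the path is the usual Linux sysfs layout and
 * may be absent on kernels without NUMA or hugepage support. */
#include <stdio.h>

int main(void)
{
    const int node = 1;            /* node named in the EAL message */
    char path[256];
    snprintf(path, sizeof(path),
             "/sys/devices/system/node/node%d/hugepages/hugepages-2048kB/free_hugepages",
             node);

    FILE *f = fopen(path, "r");
    if (!f) {
        perror("fopen");           /* node absent or hugepages not configured */
        return 1;
    }

    long free_pages = 0;
    if (fscanf(f, "%ld", &free_pages) == 1)
        printf("node %d: %ld free 2048 kB hugepages\n", node, free_pages);
    fclose(f);
    return 0;
}
```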
00:32:48.109 [2024-06-11 12:27:01.000976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:01.001245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.109 [2024-06-11 12:27:01.001253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.109 qpair failed and we were unable to recover it. 00:32:48.110 [2024-06-11 12:27:01.001566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.001889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.001897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-06-11 12:27:01.002193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.002531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.002540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-06-11 12:27:01.002851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.003179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.003187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-06-11 12:27:01.003401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.003702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.003709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-06-11 12:27:01.003745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.004051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.004059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-06-11 12:27:01.004322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.004656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.004664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 
00:32:48.110 [2024-06-11 12:27:01.004951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.005145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.005154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-06-11 12:27:01.005487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.005705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.005713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-06-11 12:27:01.005902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.006221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.006229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-06-11 12:27:01.006416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.006748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.006755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-06-11 12:27:01.007072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.007421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.007429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-06-11 12:27:01.007622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.007668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.007676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-06-11 12:27:01.007947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.008230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.008239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 
00:32:48.110 [2024-06-11 12:27:01.008554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.008885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.008893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-06-11 12:27:01.009215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.009575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.009582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-06-11 12:27:01.009788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.010091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.010099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-06-11 12:27:01.010425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.010708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.010716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-06-11 12:27:01.010892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.011241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.011249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-06-11 12:27:01.011443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.011767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.011775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-06-11 12:27:01.012124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.012322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.012329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 
00:32:48.110 [2024-06-11 12:27:01.012637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.012846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.012854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-06-11 12:27:01.013196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.013502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.013510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-06-11 12:27:01.013828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.014155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.014163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-06-11 12:27:01.014483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.014678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.014686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-06-11 12:27:01.015028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.015384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.015392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-06-11 12:27:01.015711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.016039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.016048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-06-11 12:27:01.016385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.016697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.016705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 
00:32:48.110 [2024-06-11 12:27:01.016902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.017245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.110 [2024-06-11 12:27:01.017253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.110 qpair failed and we were unable to recover it. 00:32:48.110 [2024-06-11 12:27:01.017434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.017743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.017751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 00:32:48.111 [2024-06-11 12:27:01.017928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.018216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.018224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 00:32:48.111 [2024-06-11 12:27:01.018557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.018881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.018889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 00:32:48.111 [2024-06-11 12:27:01.019099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.019259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.019266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 00:32:48.111 [2024-06-11 12:27:01.019564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.019751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.019759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 00:32:48.111 [2024-06-11 12:27:01.020085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.020342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.020349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 
00:32:48.111 [2024-06-11 12:27:01.020669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.020989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.020997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 00:32:48.111 [2024-06-11 12:27:01.021316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.021639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.021647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 00:32:48.111 [2024-06-11 12:27:01.021959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.022143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.022150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 00:32:48.111 [2024-06-11 12:27:01.022466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.022759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.022766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 00:32:48.111 [2024-06-11 12:27:01.023080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.023414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.023422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 00:32:48.111 [2024-06-11 12:27:01.023758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.023930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.023938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 00:32:48.111 [2024-06-11 12:27:01.024219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.024528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.024535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 
00:32:48.111 [2024-06-11 12:27:01.024894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.025185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.025193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 00:32:48.111 [2024-06-11 12:27:01.025581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.025870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.025878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 00:32:48.111 [2024-06-11 12:27:01.026096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.026393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.026400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 00:32:48.111 [2024-06-11 12:27:01.026720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.027079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.027088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 00:32:48.111 [2024-06-11 12:27:01.027405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.027727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.027734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 00:32:48.111 [2024-06-11 12:27:01.028059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.028385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.028393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 00:32:48.111 [2024-06-11 12:27:01.028709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.029036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.029044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 
00:32:48.111 [2024-06-11 12:27:01.029401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.029697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.029705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 00:32:48.111 [2024-06-11 12:27:01.030040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.030326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.030334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 00:32:48.111 [2024-06-11 12:27:01.030686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.030830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.030839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 00:32:48.111 [2024-06-11 12:27:01.031145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.031470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.031478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 00:32:48.111 [2024-06-11 12:27:01.031796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.032094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.032102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 00:32:48.111 [2024-06-11 12:27:01.032345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.032655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.032662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 00:32:48.111 [2024-06-11 12:27:01.032822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.033138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.033146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 
00:32:48.111 [2024-06-11 12:27:01.033494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.033839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.033846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 00:32:48.111 [2024-06-11 12:27:01.034159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.034470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.111 [2024-06-11 12:27:01.034478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.111 qpair failed and we were unable to recover it. 00:32:48.112 [2024-06-11 12:27:01.034795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.035090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.035098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.112 qpair failed and we were unable to recover it. 00:32:48.112 [2024-06-11 12:27:01.035419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.035742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.035751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.112 qpair failed and we were unable to recover it. 00:32:48.112 [2024-06-11 12:27:01.036060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.036249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.036257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.112 qpair failed and we were unable to recover it. 00:32:48.112 [2024-06-11 12:27:01.036549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.036867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.036876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.112 qpair failed and we were unable to recover it. 00:32:48.112 [2024-06-11 12:27:01.037185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.037259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.037267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.112 qpair failed and we were unable to recover it. 
00:32:48.112 [2024-06-11 12:27:01.037559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.037888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.037896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.112 qpair failed and we were unable to recover it. 00:32:48.112 [2024-06-11 12:27:01.038201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.038501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.038509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.112 qpair failed and we were unable to recover it. 00:32:48.112 [2024-06-11 12:27:01.038836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.039162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.039170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.112 qpair failed and we were unable to recover it. 00:32:48.112 [2024-06-11 12:27:01.039487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.039846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.039854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.112 qpair failed and we were unable to recover it. 00:32:48.112 [2024-06-11 12:27:01.040177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.040510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.040518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.112 qpair failed and we were unable to recover it. 00:32:48.112 [2024-06-11 12:27:01.040694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.041009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.041020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.112 qpair failed and we were unable to recover it. 00:32:48.112 [2024-06-11 12:27:01.041319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.041652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.041660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.112 qpair failed and we were unable to recover it. 
00:32:48.112 [2024-06-11 12:27:01.041984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.042293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.042301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.112 qpair failed and we were unable to recover it. 00:32:48.112 [2024-06-11 12:27:01.042597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.042790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.042799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.112 qpair failed and we were unable to recover it. 00:32:48.112 [2024-06-11 12:27:01.043151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.043359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.043366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.112 qpair failed and we were unable to recover it. 00:32:48.112 [2024-06-11 12:27:01.043685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.043851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.043859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.112 qpair failed and we were unable to recover it. 00:32:48.112 [2024-06-11 12:27:01.044133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.044458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.044465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.112 qpair failed and we were unable to recover it. 00:32:48.112 [2024-06-11 12:27:01.044785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.045129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.045137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.112 qpair failed and we were unable to recover it. 00:32:48.112 [2024-06-11 12:27:01.045451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.045649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.045657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.112 qpair failed and we were unable to recover it. 
00:32:48.112 [2024-06-11 12:27:01.045809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.046168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.046177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.112 qpair failed and we were unable to recover it. 00:32:48.112 [2024-06-11 12:27:01.046458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.046774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.046782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.112 qpair failed and we were unable to recover it. 00:32:48.112 [2024-06-11 12:27:01.047089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.047379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.047387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.112 qpair failed and we were unable to recover it. 00:32:48.112 [2024-06-11 12:27:01.047742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.048040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.048048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.112 qpair failed and we were unable to recover it. 00:32:48.112 [2024-06-11 12:27:01.048226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.048535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.048546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.112 qpair failed and we were unable to recover it. 00:32:48.112 [2024-06-11 12:27:01.048858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.112 [2024-06-11 12:27:01.049198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.049206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 00:32:48.113 [2024-06-11 12:27:01.049528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.049860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.049868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 
00:32:48.113 [2024-06-11 12:27:01.049920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:48.113 [2024-06-11 12:27:01.050011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.050190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.050199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 00:32:48.113 [2024-06-11 12:27:01.050515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.050865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.050873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 00:32:48.113 [2024-06-11 12:27:01.051180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.051528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.051536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 00:32:48.113 [2024-06-11 12:27:01.051854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.052173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.052182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 00:32:48.113 [2024-06-11 12:27:01.052483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.052783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.052791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 00:32:48.113 [2024-06-11 12:27:01.052992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.053293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.053301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 00:32:48.113 [2024-06-11 12:27:01.053610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.053758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.053766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 
00:32:48.113 [2024-06-11 12:27:01.053982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.054152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.054160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 00:32:48.113 [2024-06-11 12:27:01.054332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.054621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.054629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 00:32:48.113 [2024-06-11 12:27:01.054947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.055267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.055276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 00:32:48.113 [2024-06-11 12:27:01.055590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.055929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.055938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 00:32:48.113 [2024-06-11 12:27:01.056253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.056436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.056444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 00:32:48.113 [2024-06-11 12:27:01.056760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.057074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.057083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 00:32:48.113 [2024-06-11 12:27:01.057408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.057715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.057723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 
00:32:48.113 [2024-06-11 12:27:01.058055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.058338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.058346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 00:32:48.113 [2024-06-11 12:27:01.058649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.058946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.058954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 00:32:48.113 [2024-06-11 12:27:01.059269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.059582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.059591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 00:32:48.113 [2024-06-11 12:27:01.059760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.060081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.060090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 00:32:48.113 [2024-06-11 12:27:01.060411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.060730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.060739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 00:32:48.113 [2024-06-11 12:27:01.061050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.061100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.061108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 00:32:48.113 [2024-06-11 12:27:01.061410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.061576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.061584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 
00:32:48.113 [2024-06-11 12:27:01.061851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.062163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.062172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 00:32:48.113 [2024-06-11 12:27:01.062490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.062806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.062815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 00:32:48.113 [2024-06-11 12:27:01.063051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.063258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.063266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 00:32:48.113 [2024-06-11 12:27:01.063558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.063850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.063860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 00:32:48.113 [2024-06-11 12:27:01.064184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.064398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.064406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.113 qpair failed and we were unable to recover it. 00:32:48.113 [2024-06-11 12:27:01.064704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.113 [2024-06-11 12:27:01.065026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.065035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-06-11 12:27:01.065339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.065708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.065719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 
00:32:48.114 [2024-06-11 12:27:01.066060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.066213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.066222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-06-11 12:27:01.066415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.066735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.066743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-06-11 12:27:01.067055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.067336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.067344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-06-11 12:27:01.067641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.067830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.067839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-06-11 12:27:01.068158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.068445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.068454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-06-11 12:27:01.068768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.069164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.069173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-06-11 12:27:01.069517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.069833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.069841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 
00:32:48.114 [2024-06-11 12:27:01.070151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.070457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.070465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-06-11 12:27:01.070624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.070918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.070927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-06-11 12:27:01.071274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.071443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.071453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-06-11 12:27:01.071759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.072094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.072103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-06-11 12:27:01.072435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.072691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.072700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-06-11 12:27:01.072995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.073177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.073185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-06-11 12:27:01.073516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.073838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.073846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 
00:32:48.114 [2024-06-11 12:27:01.074131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.074448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.074457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-06-11 12:27:01.074773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.075086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.075095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-06-11 12:27:01.075397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.075690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.075699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-06-11 12:27:01.076023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.076323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.076332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-06-11 12:27:01.076708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.077045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.077054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-06-11 12:27:01.077367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.077679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.077689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-06-11 12:27:01.078026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.078354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.078362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 
00:32:48.114 [2024-06-11 12:27:01.078651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.078947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.078955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-06-11 12:27:01.079276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.079610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.079618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-06-11 12:27:01.079746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.079881] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:48.114 [2024-06-11 12:27:01.080006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.080012] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:48.114 [2024-06-11 12:27:01.080015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-06-11 12:27:01.080027] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:48.114 [2024-06-11 12:27:01.080036] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:48.114 [2024-06-11 12:27:01.080232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.080235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:32:48.114 [2024-06-11 12:27:01.080399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:32:48.114 [2024-06-11 12:27:01.080558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.080567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.114 [2024-06-11 12:27:01.080558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:32:48.114 qpair failed and we were unable to recover it. 00:32:48.114 [2024-06-11 12:27:01.080883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.114 [2024-06-11 12:27:01.080559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:32:48.115 [2024-06-11 12:27:01.081192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.081200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 
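The two app_setup_trace NOTICE lines interleaved above name the supported ways to pull trace data for this run. A minimal sketch of both, assuming the nvmf app is still running with shm instance 0 exactly as the NOTICE states; the output file names and the stdout redirect are illustrative assumptions, not part of the test:

  # Snapshot the live tracepoints named in the NOTICE (app name 'nvmf', shm id 0).
  spdk_trace -s nvmf -i 0 > nvmf_trace_snapshot.txt
  # Or keep the raw shared-memory trace file for offline analysis/debug.
  cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0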
00:32:48.115 [2024-06-11 12:27:01.081502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.081877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.081885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-06-11 12:27:01.082143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.082426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.082436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-06-11 12:27:01.082757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.083052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.083061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-06-11 12:27:01.083454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.083796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.083803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-06-11 12:27:01.084027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.084220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.084229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-06-11 12:27:01.084542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.084896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.084904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-06-11 12:27:01.085093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.085430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.085438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 
00:32:48.115 [2024-06-11 12:27:01.085751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.085969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.085976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-06-11 12:27:01.086021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.086298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.086307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-06-11 12:27:01.086485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.086794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.086803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-06-11 12:27:01.087094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.087292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.087300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-06-11 12:27:01.087617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.087934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.087945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-06-11 12:27:01.088227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.088410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.088418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-06-11 12:27:01.088733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.089052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.089062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 
00:32:48.115 [2024-06-11 12:27:01.089388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.089580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.089588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-06-11 12:27:01.089778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.089989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.089997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-06-11 12:27:01.090344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.090671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.090679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-06-11 12:27:01.090851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.091145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.091154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-06-11 12:27:01.091466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.091681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.091689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-06-11 12:27:01.091999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.092184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.092192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-06-11 12:27:01.092471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.092655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.092663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 
00:32:48.115 [2024-06-11 12:27:01.092971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.093308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.093318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-06-11 12:27:01.093599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.093900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.093908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-06-11 12:27:01.094189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.094483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.094491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-06-11 12:27:01.094807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.094968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.094976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-06-11 12:27:01.095346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.095640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.095648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-06-11 12:27:01.095984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.096286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.096294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 00:32:48.115 [2024-06-11 12:27:01.096605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.096930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.115 [2024-06-11 12:27:01.096938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.115 qpair failed and we were unable to recover it. 
00:32:48.116 [2024-06-11 12:27:01.097252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.097585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.097593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-06-11 12:27:01.097793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.097948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.097956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-06-11 12:27:01.098287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.098611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.098619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-06-11 12:27:01.098816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.099112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.099120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-06-11 12:27:01.099327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.099646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.099655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-06-11 12:27:01.099967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.100295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.100304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-06-11 12:27:01.100602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.100891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.100899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 
00:32:48.116 [2024-06-11 12:27:01.101075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.101233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.101241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-06-11 12:27:01.101412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.101600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.101608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-06-11 12:27:01.101918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.102230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.102239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-06-11 12:27:01.102537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.102837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.102845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-06-11 12:27:01.103155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.103489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.103497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-06-11 12:27:01.103806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.104129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.104138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-06-11 12:27:01.104380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.104719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.104727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 
00:32:48.116 [2024-06-11 12:27:01.105025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.105186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.105194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-06-11 12:27:01.105460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.105793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.105800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-06-11 12:27:01.106114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.106322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.106330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-06-11 12:27:01.106593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.106910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.106919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-06-11 12:27:01.107120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.107460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.107468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-06-11 12:27:01.107778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.107965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.107972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-06-11 12:27:01.108300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.108635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.108643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 
00:32:48.116 [2024-06-11 12:27:01.108954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.109181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.109189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-06-11 12:27:01.109517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.109838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.109846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-06-11 12:27:01.110152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.110341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.110348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-06-11 12:27:01.110670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.111013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.111033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-06-11 12:27:01.111073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.111410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.111418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-06-11 12:27:01.111710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.111975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.111982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 00:32:48.116 [2024-06-11 12:27:01.112284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.112608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.116 [2024-06-11 12:27:01.112616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.116 qpair failed and we were unable to recover it. 
00:32:48.117 [2024-06-11 12:27:01.112925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.113244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.113252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-06-11 12:27:01.113562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.113885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.113894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-06-11 12:27:01.114028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.114076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.114084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-06-11 12:27:01.114351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.114646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.114654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-06-11 12:27:01.114962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.115305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.115314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-06-11 12:27:01.115498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.115663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.115671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-06-11 12:27:01.115844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.116157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.116165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 
00:32:48.117 [2024-06-11 12:27:01.116358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.116654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.116662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-06-11 12:27:01.116962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.117282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.117291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-06-11 12:27:01.117602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.117789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.117796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-06-11 12:27:01.118137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.118415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.118424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-06-11 12:27:01.118772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.118938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.118946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-06-11 12:27:01.119261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.119552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.119560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-06-11 12:27:01.119869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.120205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.120213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 
00:32:48.117 [2024-06-11 12:27:01.120554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.120903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.120911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-06-11 12:27:01.121247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.121434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.121442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-06-11 12:27:01.121754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.122071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.122080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-06-11 12:27:01.122396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.122588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.117 [2024-06-11 12:27:01.122595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.117 qpair failed and we were unable to recover it. 00:32:48.117 [2024-06-11 12:27:01.122883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.389 [2024-06-11 12:27:01.123186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.389 [2024-06-11 12:27:01.123196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.389 qpair failed and we were unable to recover it. 00:32:48.389 [2024-06-11 12:27:01.123405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.389 [2024-06-11 12:27:01.123733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.389 [2024-06-11 12:27:01.123741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.389 qpair failed and we were unable to recover it. 00:32:48.389 [2024-06-11 12:27:01.124067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.389 [2024-06-11 12:27:01.124397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.389 [2024-06-11 12:27:01.124405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.389 qpair failed and we were unable to recover it. 
00:32:48.389 [2024-06-11 12:27:01.124590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.389 [2024-06-11 12:27:01.124783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.389 [2024-06-11 12:27:01.124791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.389 qpair failed and we were unable to recover it. 00:32:48.389 [2024-06-11 12:27:01.124976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.389 [2024-06-11 12:27:01.125230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.389 [2024-06-11 12:27:01.125238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.389 qpair failed and we were unable to recover it. 00:32:48.389 [2024-06-11 12:27:01.125370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.389 [2024-06-11 12:27:01.125669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.389 [2024-06-11 12:27:01.125677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.389 qpair failed and we were unable to recover it. 00:32:48.389 [2024-06-11 12:27:01.125982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.389 [2024-06-11 12:27:01.126268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.389 [2024-06-11 12:27:01.126276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.389 qpair failed and we were unable to recover it. 00:32:48.389 [2024-06-11 12:27:01.126593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.389 [2024-06-11 12:27:01.126912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.389 [2024-06-11 12:27:01.126919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.389 qpair failed and we were unable to recover it. 00:32:48.389 [2024-06-11 12:27:01.127217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.389 [2024-06-11 12:27:01.127552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.389 [2024-06-11 12:27:01.127560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.389 qpair failed and we were unable to recover it. 00:32:48.389 [2024-06-11 12:27:01.127735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.389 [2024-06-11 12:27:01.128087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.389 [2024-06-11 12:27:01.128095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.389 qpair failed and we were unable to recover it. 
00:32:48.389 [2024-06-11 12:27:01.128413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.389 [2024-06-11 12:27:01.128712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.389 [2024-06-11 12:27:01.128720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.389 qpair failed and we were unable to recover it. 00:32:48.389 [2024-06-11 12:27:01.129034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.389 [2024-06-11 12:27:01.129319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.129327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 00:32:48.390 [2024-06-11 12:27:01.129638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.129798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.129806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 00:32:48.390 [2024-06-11 12:27:01.130124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.130460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.130468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 00:32:48.390 [2024-06-11 12:27:01.130650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.130992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.131000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 00:32:48.390 [2024-06-11 12:27:01.131200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.131533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.131542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 00:32:48.390 [2024-06-11 12:27:01.131712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.132050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.132058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 
00:32:48.390 [2024-06-11 12:27:01.132387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.132679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.132686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 00:32:48.390 [2024-06-11 12:27:01.133038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.133209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.133218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 00:32:48.390 [2024-06-11 12:27:01.133452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.133632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.133640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 00:32:48.390 [2024-06-11 12:27:01.133939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.134091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.134099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 00:32:48.390 [2024-06-11 12:27:01.134306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.134381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.134388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 00:32:48.390 [2024-06-11 12:27:01.134696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.135041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.135049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 00:32:48.390 [2024-06-11 12:27:01.135389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.135729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.135737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 
00:32:48.390 [2024-06-11 12:27:01.136064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.136441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.136449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 00:32:48.390 [2024-06-11 12:27:01.136625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.136818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.136826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 00:32:48.390 [2024-06-11 12:27:01.137109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.137444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.137453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 00:32:48.390 [2024-06-11 12:27:01.137767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.138106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.138115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 00:32:48.390 [2024-06-11 12:27:01.138445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.138719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.138727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 00:32:48.390 [2024-06-11 12:27:01.139045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.139422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.139430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 00:32:48.390 [2024-06-11 12:27:01.139616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.139936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.139944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 
00:32:48.390 [2024-06-11 12:27:01.140259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.140556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.140564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 00:32:48.390 [2024-06-11 12:27:01.140856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.141169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.141177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 00:32:48.390 [2024-06-11 12:27:01.141360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.141628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.141636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 00:32:48.390 [2024-06-11 12:27:01.141954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.142032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.142039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 00:32:48.390 [2024-06-11 12:27:01.142397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.142663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.142672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 00:32:48.390 [2024-06-11 12:27:01.142860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.143162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.143170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 00:32:48.390 [2024-06-11 12:27:01.143496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.143836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.143844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 
00:32:48.390 [2024-06-11 12:27:01.144033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.144199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.144207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 00:32:48.390 [2024-06-11 12:27:01.144541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.144837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.390 [2024-06-11 12:27:01.144844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.390 qpair failed and we were unable to recover it. 00:32:48.391 [2024-06-11 12:27:01.145180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.145518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.145526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 00:32:48.391 [2024-06-11 12:27:01.145840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.146164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.146172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 00:32:48.391 [2024-06-11 12:27:01.146368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.146689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.146697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 00:32:48.391 [2024-06-11 12:27:01.146886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.147093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.147101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 00:32:48.391 [2024-06-11 12:27:01.147444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.147741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.147749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 
00:32:48.391 [2024-06-11 12:27:01.147931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.148228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.148237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 00:32:48.391 [2024-06-11 12:27:01.148400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.148555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.148563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 00:32:48.391 [2024-06-11 12:27:01.148751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.149100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.149108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 00:32:48.391 [2024-06-11 12:27:01.149413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.149580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.149588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 00:32:48.391 [2024-06-11 12:27:01.149847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.150028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.150036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 00:32:48.391 [2024-06-11 12:27:01.150212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.150380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.150388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 00:32:48.391 [2024-06-11 12:27:01.150652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.150946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.150955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 
00:32:48.391 [2024-06-11 12:27:01.151262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.151402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.151411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 00:32:48.391 [2024-06-11 12:27:01.151714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.152033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.152041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 00:32:48.391 [2024-06-11 12:27:01.152349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.152629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.152637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 00:32:48.391 [2024-06-11 12:27:01.152992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.153165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.153173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 00:32:48.391 [2024-06-11 12:27:01.153490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.153653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.153660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 00:32:48.391 [2024-06-11 12:27:01.153974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.154237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.154245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 00:32:48.391 [2024-06-11 12:27:01.154553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.154873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.154881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 
00:32:48.391 [2024-06-11 12:27:01.155198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.155392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.155401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 00:32:48.391 [2024-06-11 12:27:01.155605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.155873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.155881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 00:32:48.391 [2024-06-11 12:27:01.156240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.156536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.156545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 00:32:48.391 [2024-06-11 12:27:01.156718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.156871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.156879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 00:32:48.391 [2024-06-11 12:27:01.157039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.157317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.157325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 00:32:48.391 [2024-06-11 12:27:01.157483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.157633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.157641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 00:32:48.391 [2024-06-11 12:27:01.157917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.158222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.158230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 
00:32:48.391 [2024-06-11 12:27:01.158544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.158887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.158895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 00:32:48.391 [2024-06-11 12:27:01.159198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.159408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.159416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.391 qpair failed and we were unable to recover it. 00:32:48.391 [2024-06-11 12:27:01.159758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.391 [2024-06-11 12:27:01.159946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.159954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 00:32:48.392 [2024-06-11 12:27:01.160302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.160505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.160513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 00:32:48.392 [2024-06-11 12:27:01.160854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.161042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.161051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 00:32:48.392 [2024-06-11 12:27:01.162487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.162692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.162703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 00:32:48.392 [2024-06-11 12:27:01.163043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.163329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.163337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 
00:32:48.392 [2024-06-11 12:27:01.163538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.163872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.163880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 00:32:48.392 [2024-06-11 12:27:01.164191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.164378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.164386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 00:32:48.392 [2024-06-11 12:27:01.164662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.164984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.164992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 00:32:48.392 [2024-06-11 12:27:01.165302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.165486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.165494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 00:32:48.392 [2024-06-11 12:27:01.165804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.166113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.166121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 00:32:48.392 [2024-06-11 12:27:01.166308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.166606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.166617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 00:32:48.392 [2024-06-11 12:27:01.166947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.167163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.167172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 
00:32:48.392 [2024-06-11 12:27:01.167509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.167835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.167843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 00:32:48.392 [2024-06-11 12:27:01.168027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.168210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.168218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 00:32:48.392 [2024-06-11 12:27:01.168546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.168734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.168742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 00:32:48.392 [2024-06-11 12:27:01.168916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.169068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.169077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 00:32:48.392 [2024-06-11 12:27:01.169417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.169750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.169758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 00:32:48.392 [2024-06-11 12:27:01.169969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.170214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.170222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 00:32:48.392 [2024-06-11 12:27:01.170536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.170861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.170868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 
00:32:48.392 [2024-06-11 12:27:01.171052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.171353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.171361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 00:32:48.392 [2024-06-11 12:27:01.171513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.171842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.171852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 00:32:48.392 [2024-06-11 12:27:01.172159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.172351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.172358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 00:32:48.392 [2024-06-11 12:27:01.172624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.172668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.172677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 00:32:48.392 [2024-06-11 12:27:01.173011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.173315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.173323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 00:32:48.392 [2024-06-11 12:27:01.173525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.173691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.173699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 00:32:48.392 [2024-06-11 12:27:01.173876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.174035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.174044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 
00:32:48.392 [2024-06-11 12:27:01.174322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.174623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.174631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 00:32:48.392 [2024-06-11 12:27:01.174807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.174983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.174992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 00:32:48.392 [2024-06-11 12:27:01.175030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.175344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.392 [2024-06-11 12:27:01.175352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.392 qpair failed and we were unable to recover it. 00:32:48.392 [2024-06-11 12:27:01.175524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.175836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.175844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 00:32:48.393 [2024-06-11 12:27:01.176007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.176203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.176213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 00:32:48.393 [2024-06-11 12:27:01.176489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.176679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.176687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 00:32:48.393 [2024-06-11 12:27:01.176796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.176983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.176992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 
00:32:48.393 [2024-06-11 12:27:01.177166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.177491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.177499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 00:32:48.393 [2024-06-11 12:27:01.177688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.177985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.177993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 00:32:48.393 [2024-06-11 12:27:01.178322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.178619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.178627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 00:32:48.393 [2024-06-11 12:27:01.178928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.179226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.179235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 00:32:48.393 [2024-06-11 12:27:01.179560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.179879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.179886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 00:32:48.393 [2024-06-11 12:27:01.179929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.180233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.180241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 00:32:48.393 [2024-06-11 12:27:01.180551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.180718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.180726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 
00:32:48.393 [2024-06-11 12:27:01.181051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.181381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.181391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 00:32:48.393 [2024-06-11 12:27:01.181548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.181821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.181829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 00:32:48.393 [2024-06-11 12:27:01.182155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.182482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.182489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 00:32:48.393 [2024-06-11 12:27:01.182812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.182983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.182991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 00:32:48.393 [2024-06-11 12:27:01.183146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.183465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.183473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 00:32:48.393 [2024-06-11 12:27:01.183787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.184090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.184098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 00:32:48.393 [2024-06-11 12:27:01.184412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.184755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.184763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 
00:32:48.393 [2024-06-11 12:27:01.185073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.185397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.185405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 00:32:48.393 [2024-06-11 12:27:01.185733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.185894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.185902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 00:32:48.393 [2024-06-11 12:27:01.186102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.186445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.186453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 00:32:48.393 [2024-06-11 12:27:01.186761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.187083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.187091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 00:32:48.393 [2024-06-11 12:27:01.187399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.187656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.187664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 00:32:48.393 [2024-06-11 12:27:01.188004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.188126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.188135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 00:32:48.393 [2024-06-11 12:27:01.188352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.188610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.188618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 
00:32:48.393 [2024-06-11 12:27:01.188922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.189220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.189228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 00:32:48.393 [2024-06-11 12:27:01.189537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.189858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.189866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 00:32:48.393 [2024-06-11 12:27:01.190161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.190510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.190518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.393 qpair failed and we were unable to recover it. 00:32:48.393 [2024-06-11 12:27:01.190832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.393 [2024-06-11 12:27:01.191154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.191162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.394 qpair failed and we were unable to recover it. 00:32:48.394 [2024-06-11 12:27:01.191475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.191637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.191645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.394 qpair failed and we were unable to recover it. 00:32:48.394 [2024-06-11 12:27:01.191820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.192003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.192011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.394 qpair failed and we were unable to recover it. 00:32:48.394 [2024-06-11 12:27:01.192307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.192653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.192661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.394 qpair failed and we were unable to recover it. 
00:32:48.394 [2024-06-11 12:27:01.192864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.193027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.193034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.394 qpair failed and we were unable to recover it. 00:32:48.394 [2024-06-11 12:27:01.193316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.193637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.193644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.394 qpair failed and we were unable to recover it. 00:32:48.394 [2024-06-11 12:27:01.193956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.194271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.194279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.394 qpair failed and we were unable to recover it. 00:32:48.394 [2024-06-11 12:27:01.194580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.194872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.194880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.394 qpair failed and we were unable to recover it. 00:32:48.394 [2024-06-11 12:27:01.194915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.195222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.195230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.394 qpair failed and we were unable to recover it. 00:32:48.394 [2024-06-11 12:27:01.195541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.195840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.195848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.394 qpair failed and we were unable to recover it. 00:32:48.394 [2024-06-11 12:27:01.196157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.196490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.196498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.394 qpair failed and we were unable to recover it. 
00:32:48.394 [2024-06-11 12:27:01.196785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.197103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.197111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.394 qpair failed and we were unable to recover it. 00:32:48.394 [2024-06-11 12:27:01.197422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.197602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.197610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.394 qpair failed and we were unable to recover it. 00:32:48.394 [2024-06-11 12:27:01.197929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.198249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.198257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.394 qpair failed and we were unable to recover it. 00:32:48.394 [2024-06-11 12:27:01.198569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.198891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.198899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.394 qpair failed and we were unable to recover it. 00:32:48.394 [2024-06-11 12:27:01.199073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.199397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.199405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.394 qpair failed and we were unable to recover it. 00:32:48.394 [2024-06-11 12:27:01.199714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.199910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.199918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.394 qpair failed and we were unable to recover it. 00:32:48.394 [2024-06-11 12:27:01.200203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.200496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.200504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.394 qpair failed and we were unable to recover it. 
00:32:48.394 [2024-06-11 12:27:01.200812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.201128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.201136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.394 qpair failed and we were unable to recover it. 00:32:48.394 [2024-06-11 12:27:01.201461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.201496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.201504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.394 qpair failed and we were unable to recover it. 00:32:48.394 [2024-06-11 12:27:01.201818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.201985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.201994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.394 qpair failed and we were unable to recover it. 00:32:48.394 [2024-06-11 12:27:01.202295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.202460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.202468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.394 qpair failed and we were unable to recover it. 00:32:48.394 [2024-06-11 12:27:01.202783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.202974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.202982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.394 qpair failed and we were unable to recover it. 00:32:48.394 [2024-06-11 12:27:01.203299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.394 [2024-06-11 12:27:01.203591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.203599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 00:32:48.395 [2024-06-11 12:27:01.203910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.204223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.204231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 
00:32:48.395 [2024-06-11 12:27:01.204383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.204707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.204714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 00:32:48.395 [2024-06-11 12:27:01.205029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.205359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.205367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 00:32:48.395 [2024-06-11 12:27:01.205698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.206038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.206046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 00:32:48.395 [2024-06-11 12:27:01.206384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.206454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.206462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 00:32:48.395 [2024-06-11 12:27:01.206625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.206965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.206973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 00:32:48.395 [2024-06-11 12:27:01.207257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.207550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.207559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 00:32:48.395 [2024-06-11 12:27:01.207872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.208128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.208137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 
00:32:48.395 [2024-06-11 12:27:01.208453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.208781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.208789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 00:32:48.395 [2024-06-11 12:27:01.209102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.209265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.209273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 00:32:48.395 [2024-06-11 12:27:01.209476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.209637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.209644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 00:32:48.395 [2024-06-11 12:27:01.209821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.210152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.210160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 00:32:48.395 [2024-06-11 12:27:01.210488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.210828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.210835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 00:32:48.395 [2024-06-11 12:27:01.211145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.211325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.211334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 00:32:48.395 [2024-06-11 12:27:01.211523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.211734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.211743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 
00:32:48.395 [2024-06-11 12:27:01.212062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.212341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.212349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 00:32:48.395 [2024-06-11 12:27:01.212662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.212961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.212969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 00:32:48.395 [2024-06-11 12:27:01.213281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.213325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.213333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 00:32:48.395 [2024-06-11 12:27:01.213612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.213951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.213959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 00:32:48.395 [2024-06-11 12:27:01.214258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.214578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.214586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 00:32:48.395 [2024-06-11 12:27:01.214904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.215196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.215204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 00:32:48.395 [2024-06-11 12:27:01.215512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.215702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.215710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 
00:32:48.395 [2024-06-11 12:27:01.216025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.216342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.216350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 00:32:48.395 [2024-06-11 12:27:01.216682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.216975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.216982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 00:32:48.395 [2024-06-11 12:27:01.217282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.217486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.217494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 00:32:48.395 [2024-06-11 12:27:01.217759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.218082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.218091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 00:32:48.395 [2024-06-11 12:27:01.218406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.218575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.218584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 00:32:48.395 [2024-06-11 12:27:01.218886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.219190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.395 [2024-06-11 12:27:01.219198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.395 qpair failed and we were unable to recover it. 00:32:48.395 [2024-06-11 12:27:01.219505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.219844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.219851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 
00:32:48.396 [2024-06-11 12:27:01.220049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.220206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.220214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 00:32:48.396 [2024-06-11 12:27:01.220544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.220890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.220898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 00:32:48.396 [2024-06-11 12:27:01.221213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.221387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.221395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 00:32:48.396 [2024-06-11 12:27:01.221695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.221985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.221993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 00:32:48.396 [2024-06-11 12:27:01.222294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.222632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.222640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 00:32:48.396 [2024-06-11 12:27:01.222949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.223282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.223290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 00:32:48.396 [2024-06-11 12:27:01.223443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.223704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.223712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 
00:32:48.396 [2024-06-11 12:27:01.223894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.223929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.223937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 00:32:48.396 [2024-06-11 12:27:01.224100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.224391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.224399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 00:32:48.396 [2024-06-11 12:27:01.224719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.224872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.224880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 00:32:48.396 [2024-06-11 12:27:01.225086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.225420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.225427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 00:32:48.396 [2024-06-11 12:27:01.225740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.225920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.225928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 00:32:48.396 [2024-06-11 12:27:01.226092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.226289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.226297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 00:32:48.396 [2024-06-11 12:27:01.226500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.226677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.226685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 
00:32:48.396 [2024-06-11 12:27:01.226848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.227178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.227186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 00:32:48.396 [2024-06-11 12:27:01.227389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.227699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.227707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 00:32:48.396 [2024-06-11 12:27:01.228022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.228301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.228309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 00:32:48.396 [2024-06-11 12:27:01.228620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.228792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.228800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 00:32:48.396 [2024-06-11 12:27:01.229080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.229262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.229271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 00:32:48.396 [2024-06-11 12:27:01.229448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.229768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.229776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 00:32:48.396 [2024-06-11 12:27:01.229953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.230254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.230262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 
00:32:48.396 [2024-06-11 12:27:01.230564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.230854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.230862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 00:32:48.396 [2024-06-11 12:27:01.231181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.231479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.231487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 00:32:48.396 [2024-06-11 12:27:01.231795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.231998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.232006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 00:32:48.396 [2024-06-11 12:27:01.232292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.232584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.232592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 00:32:48.396 [2024-06-11 12:27:01.232900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.233183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.233191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 00:32:48.396 [2024-06-11 12:27:01.233395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.233713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.233721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.396 qpair failed and we were unable to recover it. 00:32:48.396 [2024-06-11 12:27:01.233892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.396 [2024-06-11 12:27:01.234198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.234207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 
00:32:48.397 [2024-06-11 12:27:01.234518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.234813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.234821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 00:32:48.397 [2024-06-11 12:27:01.235140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.235325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.235333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 00:32:48.397 [2024-06-11 12:27:01.235626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.235828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.235836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 00:32:48.397 [2024-06-11 12:27:01.235995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.236181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.236189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 00:32:48.397 [2024-06-11 12:27:01.236515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.236855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.236863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 00:32:48.397 [2024-06-11 12:27:01.237035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.237248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.237256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 00:32:48.397 [2024-06-11 12:27:01.237548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.237900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.237907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 
00:32:48.397 [2024-06-11 12:27:01.238219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.238399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.238408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 00:32:48.397 [2024-06-11 12:27:01.238719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.239016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.239026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 00:32:48.397 [2024-06-11 12:27:01.239323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.239485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.239494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 00:32:48.397 [2024-06-11 12:27:01.239770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.240089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.240097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 00:32:48.397 [2024-06-11 12:27:01.240296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.240462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.240470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 00:32:48.397 [2024-06-11 12:27:01.240786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.241124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.241133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 00:32:48.397 [2024-06-11 12:27:01.241437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.241759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.241767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 
00:32:48.397 [2024-06-11 12:27:01.242096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.242416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.242424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 00:32:48.397 [2024-06-11 12:27:01.242578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.242895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.242902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 00:32:48.397 [2024-06-11 12:27:01.243076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.243249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.243257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 00:32:48.397 [2024-06-11 12:27:01.243562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.243901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.243909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 00:32:48.397 [2024-06-11 12:27:01.244240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.244581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.244589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 00:32:48.397 [2024-06-11 12:27:01.244759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.245051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.245060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 00:32:48.397 [2024-06-11 12:27:01.245381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.245574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.245582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 
00:32:48.397 [2024-06-11 12:27:01.245737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.245934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.245942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 00:32:48.397 [2024-06-11 12:27:01.246113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.246451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.246459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 00:32:48.397 [2024-06-11 12:27:01.246502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.246805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.246813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 00:32:48.397 [2024-06-11 12:27:01.247123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.247458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.247465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 00:32:48.397 [2024-06-11 12:27:01.247728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.247920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.247929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 00:32:48.397 [2024-06-11 12:27:01.248223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.248421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.248429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 00:32:48.397 [2024-06-11 12:27:01.248789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.249033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.397 [2024-06-11 12:27:01.249041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.397 qpair failed and we were unable to recover it. 
00:32:48.397 [2024-06-11 12:27:01.249396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.249639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.249648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.398 qpair failed and we were unable to recover it. 00:32:48.398 [2024-06-11 12:27:01.249947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.250221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.250229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.398 qpair failed and we were unable to recover it. 00:32:48.398 [2024-06-11 12:27:01.250562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.250850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.250858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.398 qpair failed and we were unable to recover it. 00:32:48.398 [2024-06-11 12:27:01.251212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.251506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.251514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.398 qpair failed and we were unable to recover it. 00:32:48.398 [2024-06-11 12:27:01.251549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.251705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.251713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.398 qpair failed and we were unable to recover it. 00:32:48.398 [2024-06-11 12:27:01.252030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.252351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.252360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.398 qpair failed and we were unable to recover it. 00:32:48.398 [2024-06-11 12:27:01.252538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.252610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.252618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.398 qpair failed and we were unable to recover it. 
00:32:48.398 [2024-06-11 12:27:01.252800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.252980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.252988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.398 qpair failed and we were unable to recover it. 00:32:48.398 [2024-06-11 12:27:01.253262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.253570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.253578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.398 qpair failed and we were unable to recover it. 00:32:48.398 [2024-06-11 12:27:01.253743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.254064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.254073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.398 qpair failed and we were unable to recover it. 00:32:48.398 [2024-06-11 12:27:01.254377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.254537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.254545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.398 qpair failed and we were unable to recover it. 00:32:48.398 [2024-06-11 12:27:01.254836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.255153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.255161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.398 qpair failed and we were unable to recover it. 00:32:48.398 [2024-06-11 12:27:01.255507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.255847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.255854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.398 qpair failed and we were unable to recover it. 00:32:48.398 [2024-06-11 12:27:01.256161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.256321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.256330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.398 qpair failed and we were unable to recover it. 
00:32:48.398 [2024-06-11 12:27:01.256667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.256835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.256843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.398 qpair failed and we were unable to recover it. 00:32:48.398 [2024-06-11 12:27:01.257114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.257411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.257420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.398 qpair failed and we were unable to recover it. 00:32:48.398 [2024-06-11 12:27:01.257595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.257939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.257947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.398 qpair failed and we were unable to recover it. 00:32:48.398 [2024-06-11 12:27:01.258230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.258514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.258521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.398 qpair failed and we were unable to recover it. 00:32:48.398 [2024-06-11 12:27:01.258813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.259150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.259159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.398 qpair failed and we were unable to recover it. 00:32:48.398 [2024-06-11 12:27:01.259462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.259800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.259808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.398 qpair failed and we were unable to recover it. 00:32:48.398 [2024-06-11 12:27:01.260123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.260453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.260461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.398 qpair failed and we were unable to recover it. 
00:32:48.398 [2024-06-11 12:27:01.260771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.261063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.261071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.398 qpair failed and we were unable to recover it. 00:32:48.398 [2024-06-11 12:27:01.261375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.261691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.261699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.398 qpair failed and we were unable to recover it. 00:32:48.398 [2024-06-11 12:27:01.262008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.262347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.262355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.398 qpair failed and we were unable to recover it. 00:32:48.398 [2024-06-11 12:27:01.262664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.262893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.398 [2024-06-11 12:27:01.262901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 00:32:48.399 [2024-06-11 12:27:01.263232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.263562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.263572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 00:32:48.399 [2024-06-11 12:27:01.263754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.263914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.263922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 00:32:48.399 [2024-06-11 12:27:01.264104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.264301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.264309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 
00:32:48.399 [2024-06-11 12:27:01.264629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.264918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.264926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 00:32:48.399 [2024-06-11 12:27:01.265226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.265527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.265536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 00:32:48.399 [2024-06-11 12:27:01.265688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.266032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.266041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 00:32:48.399 [2024-06-11 12:27:01.266080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.266391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.266399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 00:32:48.399 [2024-06-11 12:27:01.266713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.267036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.267044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 00:32:48.399 [2024-06-11 12:27:01.267365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.267708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.267716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 00:32:48.399 [2024-06-11 12:27:01.268016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.268306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.268315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 
00:32:48.399 [2024-06-11 12:27:01.268635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.268927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.268937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 00:32:48.399 [2024-06-11 12:27:01.269246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.269555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.269563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 00:32:48.399 [2024-06-11 12:27:01.269873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.270181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.270189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 00:32:48.399 [2024-06-11 12:27:01.270516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.270839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.270847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 00:32:48.399 [2024-06-11 12:27:01.271078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.271379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.271387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 00:32:48.399 [2024-06-11 12:27:01.271587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.271626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.271633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 00:32:48.399 [2024-06-11 12:27:01.271912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.272096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.272105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 
00:32:48.399 [2024-06-11 12:27:01.272429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.272766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.272774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 00:32:48.399 [2024-06-11 12:27:01.273084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.273387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.273395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 00:32:48.399 [2024-06-11 12:27:01.273705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.274042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.274050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 00:32:48.399 [2024-06-11 12:27:01.274360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.274669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.274678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 00:32:48.399 [2024-06-11 12:27:01.275010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.275297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.275305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 00:32:48.399 [2024-06-11 12:27:01.275650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.275968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.275976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 00:32:48.399 [2024-06-11 12:27:01.276150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.276411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.276419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 
00:32:48.399 [2024-06-11 12:27:01.276706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.276745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.276753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 00:32:48.399 [2024-06-11 12:27:01.276935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.277248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.277257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 00:32:48.399 [2024-06-11 12:27:01.277566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.277888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.277896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 00:32:48.399 [2024-06-11 12:27:01.278071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.278379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.278386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.399 qpair failed and we were unable to recover it. 00:32:48.399 [2024-06-11 12:27:01.278583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.399 [2024-06-11 12:27:01.278905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.278913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 00:32:48.400 [2024-06-11 12:27:01.279250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.279610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.279618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 00:32:48.400 [2024-06-11 12:27:01.279795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.280144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.280152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 
00:32:48.400 [2024-06-11 12:27:01.280472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.280809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.280817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 00:32:48.400 [2024-06-11 12:27:01.281129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.281437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.281445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 00:32:48.400 [2024-06-11 12:27:01.281635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.281960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.281968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 00:32:48.400 [2024-06-11 12:27:01.282273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.282590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.282598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 00:32:48.400 [2024-06-11 12:27:01.282886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.283226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.283235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 00:32:48.400 [2024-06-11 12:27:01.283555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.283595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.283603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 00:32:48.400 [2024-06-11 12:27:01.283926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.284275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.284283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 
00:32:48.400 [2024-06-11 12:27:01.284601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.284776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.284784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 00:32:48.400 [2024-06-11 12:27:01.285095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.285445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.285454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 00:32:48.400 [2024-06-11 12:27:01.285637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.285941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.285949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 00:32:48.400 [2024-06-11 12:27:01.286267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.286305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.286313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 00:32:48.400 [2024-06-11 12:27:01.286512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.286826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.286834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 00:32:48.400 [2024-06-11 12:27:01.287147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.287465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.287473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 00:32:48.400 [2024-06-11 12:27:01.287783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.287973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.287981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 
00:32:48.400 [2024-06-11 12:27:01.288262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.288423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.288431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 00:32:48.400 [2024-06-11 12:27:01.288596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.288938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.288946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 00:32:48.400 [2024-06-11 12:27:01.289275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.289603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.289611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 00:32:48.400 [2024-06-11 12:27:01.289921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.289954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.289961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 00:32:48.400 [2024-06-11 12:27:01.290244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.290541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.290549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 00:32:48.400 [2024-06-11 12:27:01.290901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.291189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.291197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 00:32:48.400 [2024-06-11 12:27:01.291492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.291837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.291845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 
00:32:48.400 [2024-06-11 12:27:01.292236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.292530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.292538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 00:32:48.400 [2024-06-11 12:27:01.292823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.293141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.293150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 00:32:48.400 [2024-06-11 12:27:01.293475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.293816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.293824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 00:32:48.400 [2024-06-11 12:27:01.294238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.294391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.294399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 00:32:48.400 [2024-06-11 12:27:01.294626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.294922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.400 [2024-06-11 12:27:01.294930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.400 qpair failed and we were unable to recover it. 00:32:48.401 [2024-06-11 12:27:01.295237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.295576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.295584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 00:32:48.401 [2024-06-11 12:27:01.295887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.296280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.296288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 
00:32:48.401 [2024-06-11 12:27:01.296509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.296805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.296813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 00:32:48.401 [2024-06-11 12:27:01.296996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.297037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.297045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 00:32:48.401 [2024-06-11 12:27:01.297340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.297631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.297639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 00:32:48.401 [2024-06-11 12:27:01.297798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.298079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.298087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 00:32:48.401 [2024-06-11 12:27:01.298398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.298718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.298726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 00:32:48.401 [2024-06-11 12:27:01.299040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.299312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.299320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 00:32:48.401 [2024-06-11 12:27:01.299653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.299968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.299976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 
00:32:48.401 [2024-06-11 12:27:01.300209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.300504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.300512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 00:32:48.401 [2024-06-11 12:27:01.300821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.301126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.301134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 00:32:48.401 [2024-06-11 12:27:01.301321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.301638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.301646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 00:32:48.401 [2024-06-11 12:27:01.301945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.302264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.302272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 00:32:48.401 [2024-06-11 12:27:01.302589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.302867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.302875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 00:32:48.401 [2024-06-11 12:27:01.303059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.303405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.303412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 00:32:48.401 [2024-06-11 12:27:01.303585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.303927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.303935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 
00:32:48.401 [2024-06-11 12:27:01.304248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.304477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.304486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 00:32:48.401 [2024-06-11 12:27:01.304657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.304958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.304966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 00:32:48.401 [2024-06-11 12:27:01.305223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.305364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.305372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 00:32:48.401 [2024-06-11 12:27:01.305521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.305768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.305776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 00:32:48.401 [2024-06-11 12:27:01.306103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.306390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.306398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 00:32:48.401 [2024-06-11 12:27:01.306548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.306869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.306877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 00:32:48.401 [2024-06-11 12:27:01.307190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.307376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.307383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 
00:32:48.401 [2024-06-11 12:27:01.307700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.308003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.308011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 00:32:48.401 [2024-06-11 12:27:01.308240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.308558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.308566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 00:32:48.401 [2024-06-11 12:27:01.308747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.309084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.309092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 00:32:48.401 [2024-06-11 12:27:01.309254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.309542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.309550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 00:32:48.401 [2024-06-11 12:27:01.309744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.309945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.309953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.401 qpair failed and we were unable to recover it. 00:32:48.401 [2024-06-11 12:27:01.310262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.401 [2024-06-11 12:27:01.310584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.402 [2024-06-11 12:27:01.310592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.402 qpair failed and we were unable to recover it. 00:32:48.402 [2024-06-11 12:27:01.310788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.402 [2024-06-11 12:27:01.311089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.402 [2024-06-11 12:27:01.311097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.402 qpair failed and we were unable to recover it. 
00:32:48.402 [2024-06-11 12:27:01.311267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.402 [2024-06-11 12:27:01.311421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.402 [2024-06-11 12:27:01.311429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.402 qpair failed and we were unable to recover it. 00:32:48.402 [2024-06-11 12:27:01.311588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.402 [2024-06-11 12:27:01.311928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.402 [2024-06-11 12:27:01.311936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.402 qpair failed and we were unable to recover it. 00:32:48.402 [2024-06-11 12:27:01.312235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.402 [2024-06-11 12:27:01.312415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.402 [2024-06-11 12:27:01.312423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.402 qpair failed and we were unable to recover it. 00:32:48.402 [2024-06-11 12:27:01.312751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.402 [2024-06-11 12:27:01.313089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.402 [2024-06-11 12:27:01.313097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.402 qpair failed and we were unable to recover it. 00:32:48.402 [2024-06-11 12:27:01.313490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.402 [2024-06-11 12:27:01.313652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.402 [2024-06-11 12:27:01.313660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.402 qpair failed and we were unable to recover it. 00:32:48.402 [2024-06-11 12:27:01.313963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.402 [2024-06-11 12:27:01.314011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.402 [2024-06-11 12:27:01.314021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.402 qpair failed and we were unable to recover it. 00:32:48.402 [2024-06-11 12:27:01.314312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.402 [2024-06-11 12:27:01.314493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.402 [2024-06-11 12:27:01.314501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.402 qpair failed and we were unable to recover it. 
00:32:48.402 [2024-06-11 12:27:01.314788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.402 [2024-06-11 12:27:01.315128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.402 [2024-06-11 12:27:01.315136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420
00:32:48.402 qpair failed and we were unable to recover it.
[... identical retry cycles elided: the same pair of posix.c:1032:posix_sock_create "connect() failed, errno = 111" messages, followed by nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock reporting a sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it.", repeats continuously from 12:27:01.315440 through 12:27:01.397822 ...]
00:32:48.407 [2024-06-11 12:27:01.398126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.407 [2024-06-11 12:27:01.398449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.407 [2024-06-11 12:27:01.398457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420
00:32:48.407 qpair failed and we were unable to recover it.
00:32:48.407 [2024-06-11 12:27:01.398759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.407 [2024-06-11 12:27:01.399084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.407 [2024-06-11 12:27:01.399092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.407 qpair failed and we were unable to recover it. 00:32:48.407 [2024-06-11 12:27:01.399289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.407 [2024-06-11 12:27:01.399571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.407 [2024-06-11 12:27:01.399579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.408 qpair failed and we were unable to recover it. 00:32:48.408 [2024-06-11 12:27:01.399892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.400080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.400089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.408 qpair failed and we were unable to recover it. 00:32:48.408 [2024-06-11 12:27:01.400283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.400480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.400488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.408 qpair failed and we were unable to recover it. 00:32:48.408 [2024-06-11 12:27:01.400768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.400962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.400970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.408 qpair failed and we were unable to recover it. 00:32:48.408 [2024-06-11 12:27:01.401259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.401565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.401572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.408 qpair failed and we were unable to recover it. 00:32:48.408 [2024-06-11 12:27:01.401893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.402214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.402222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.408 qpair failed and we were unable to recover it. 
00:32:48.408 [2024-06-11 12:27:01.402553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.402724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.402733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.408 qpair failed and we were unable to recover it. 00:32:48.408 [2024-06-11 12:27:01.402921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.403197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.403205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.408 qpair failed and we were unable to recover it. 00:32:48.408 [2024-06-11 12:27:01.403543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.403729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.403737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.408 qpair failed and we were unable to recover it. 00:32:48.408 [2024-06-11 12:27:01.403903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.404105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.404113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.408 qpair failed and we were unable to recover it. 00:32:48.408 [2024-06-11 12:27:01.404432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.404771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.404779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.408 qpair failed and we were unable to recover it. 00:32:48.408 [2024-06-11 12:27:01.405086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.405275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.405284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.408 qpair failed and we were unable to recover it. 00:32:48.408 [2024-06-11 12:27:01.405616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.405943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.405951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.408 qpair failed and we were unable to recover it. 
00:32:48.408 [2024-06-11 12:27:01.406285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.406630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.406638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.408 qpair failed and we were unable to recover it. 00:32:48.408 [2024-06-11 12:27:01.406941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.407102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.407110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.408 qpair failed and we were unable to recover it. 00:32:48.408 [2024-06-11 12:27:01.407377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.407674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.407681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.408 qpair failed and we were unable to recover it. 00:32:48.408 [2024-06-11 12:27:01.408022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.408087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.408095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.408 qpair failed and we were unable to recover it. 00:32:48.408 [2024-06-11 12:27:01.408393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.408762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.408769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.408 qpair failed and we were unable to recover it. 00:32:48.408 [2024-06-11 12:27:01.409124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.409466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.409474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.408 qpair failed and we were unable to recover it. 00:32:48.408 [2024-06-11 12:27:01.409858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.410049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.410058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.408 qpair failed and we were unable to recover it. 
00:32:48.408 [2024-06-11 12:27:01.410369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.410693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.410701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.408 qpair failed and we were unable to recover it. 00:32:48.408 [2024-06-11 12:27:01.410877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.411060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.411069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.408 qpair failed and we were unable to recover it. 00:32:48.408 [2024-06-11 12:27:01.411394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.411734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.411742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.408 qpair failed and we were unable to recover it. 00:32:48.408 [2024-06-11 12:27:01.412097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.412371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.412379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.408 qpair failed and we were unable to recover it. 00:32:48.408 [2024-06-11 12:27:01.412572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.412867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.408 [2024-06-11 12:27:01.412875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.408 qpair failed and we were unable to recover it. 00:32:48.408 [2024-06-11 12:27:01.413182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.681 [2024-06-11 12:27:01.413522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.681 [2024-06-11 12:27:01.413532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.681 qpair failed and we were unable to recover it. 00:32:48.681 [2024-06-11 12:27:01.413886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.681 [2024-06-11 12:27:01.414169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.681 [2024-06-11 12:27:01.414177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.681 qpair failed and we were unable to recover it. 
00:32:48.681 [2024-06-11 12:27:01.414365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.681 [2024-06-11 12:27:01.414691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.681 [2024-06-11 12:27:01.414698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.681 qpair failed and we were unable to recover it. 00:32:48.681 [2024-06-11 12:27:01.414886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.681 [2024-06-11 12:27:01.415083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.681 [2024-06-11 12:27:01.415092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.681 qpair failed and we were unable to recover it. 00:32:48.681 [2024-06-11 12:27:01.415417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.681 [2024-06-11 12:27:01.415470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.681 [2024-06-11 12:27:01.415478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.681 qpair failed and we were unable to recover it. 00:32:48.681 [2024-06-11 12:27:01.415773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.681 [2024-06-11 12:27:01.416090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.681 [2024-06-11 12:27:01.416098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.681 qpair failed and we were unable to recover it. 00:32:48.681 [2024-06-11 12:27:01.416412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.681 [2024-06-11 12:27:01.416585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.681 [2024-06-11 12:27:01.416593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.681 qpair failed and we were unable to recover it. 00:32:48.681 [2024-06-11 12:27:01.416942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.681 [2024-06-11 12:27:01.417299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.681 [2024-06-11 12:27:01.417307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.681 qpair failed and we were unable to recover it. 00:32:48.682 [2024-06-11 12:27:01.417615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.417930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.417938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 
00:32:48.682 [2024-06-11 12:27:01.418275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.418446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.418453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 00:32:48.682 [2024-06-11 12:27:01.418754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.419089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.419097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 00:32:48.682 [2024-06-11 12:27:01.419253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.419536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.419544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 00:32:48.682 [2024-06-11 12:27:01.419700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.420041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.420049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 00:32:48.682 [2024-06-11 12:27:01.420242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.420574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.420582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 00:32:48.682 [2024-06-11 12:27:01.420977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.421240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.421249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 00:32:48.682 [2024-06-11 12:27:01.421411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.421747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.421755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 
00:32:48.682 [2024-06-11 12:27:01.421918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.422196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.422205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 00:32:48.682 [2024-06-11 12:27:01.422521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.422703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.422711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 00:32:48.682 [2024-06-11 12:27:01.422863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.423186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.423195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 00:32:48.682 [2024-06-11 12:27:01.423487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.423780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.423788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 00:32:48.682 [2024-06-11 12:27:01.424107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.424412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.424421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 00:32:48.682 [2024-06-11 12:27:01.424590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.424749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.424758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 00:32:48.682 [2024-06-11 12:27:01.425077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.425405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.425413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 
00:32:48.682 [2024-06-11 12:27:01.425704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.425874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.425883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 00:32:48.682 [2024-06-11 12:27:01.426063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.426406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.426415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 00:32:48.682 [2024-06-11 12:27:01.426722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.427042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.427051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 00:32:48.682 [2024-06-11 12:27:01.427371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.427544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.427552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 00:32:48.682 [2024-06-11 12:27:01.427867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.428036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.428045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 00:32:48.682 [2024-06-11 12:27:01.428368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.428710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.428720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 00:32:48.682 [2024-06-11 12:27:01.429029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.429189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.429197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 
00:32:48.682 [2024-06-11 12:27:01.429516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.429837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.429845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 00:32:48.682 [2024-06-11 12:27:01.430160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.430315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.430324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 00:32:48.682 [2024-06-11 12:27:01.430600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.430757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.430766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 00:32:48.682 [2024-06-11 12:27:01.431077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.431387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.431395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 00:32:48.682 [2024-06-11 12:27:01.431606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.431882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.431890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 00:32:48.682 [2024-06-11 12:27:01.432185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.432367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.432375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.682 qpair failed and we were unable to recover it. 00:32:48.682 [2024-06-11 12:27:01.432653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.682 [2024-06-11 12:27:01.432926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.432934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 
00:32:48.683 [2024-06-11 12:27:01.433124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.433433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.433441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 00:32:48.683 [2024-06-11 12:27:01.433752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.434077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.434088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 00:32:48.683 [2024-06-11 12:27:01.434432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.434760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.434769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 00:32:48.683 [2024-06-11 12:27:01.434986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.435273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.435282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 00:32:48.683 [2024-06-11 12:27:01.435599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.435926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.435934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 00:32:48.683 [2024-06-11 12:27:01.436101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.436300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.436309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 00:32:48.683 [2024-06-11 12:27:01.436622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.436926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.436934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 
00:32:48.683 [2024-06-11 12:27:01.437267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.437589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.437597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 00:32:48.683 [2024-06-11 12:27:01.437750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.438072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.438080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 00:32:48.683 [2024-06-11 12:27:01.438255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.438475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.438483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 00:32:48.683 [2024-06-11 12:27:01.438673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.438836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.438844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 00:32:48.683 [2024-06-11 12:27:01.439016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.439337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.439349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 00:32:48.683 [2024-06-11 12:27:01.439663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.439844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.439853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 00:32:48.683 [2024-06-11 12:27:01.440187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.440534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.440542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 
00:32:48.683 [2024-06-11 12:27:01.440878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.441224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.441232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 00:32:48.683 [2024-06-11 12:27:01.441405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.441764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.441772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 00:32:48.683 [2024-06-11 12:27:01.441956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.442260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.442269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 00:32:48.683 [2024-06-11 12:27:01.442585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.442935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.442943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 00:32:48.683 [2024-06-11 12:27:01.443178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.443216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.443224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 00:32:48.683 [2024-06-11 12:27:01.443523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.443692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.443700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 00:32:48.683 [2024-06-11 12:27:01.444008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.444192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.444201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 
00:32:48.683 [2024-06-11 12:27:01.444390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.444658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.444665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 00:32:48.683 [2024-06-11 12:27:01.444976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.445311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.445320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 00:32:48.683 [2024-06-11 12:27:01.445357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.445514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.445522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 00:32:48.683 [2024-06-11 12:27:01.445630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.445892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.445900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 00:32:48.683 [2024-06-11 12:27:01.446178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.446424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.446432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 00:32:48.683 [2024-06-11 12:27:01.446763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.447093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.447102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 00:32:48.683 [2024-06-11 12:27:01.447445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.447746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.683 [2024-06-11 12:27:01.447754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.683 qpair failed and we were unable to recover it. 
00:32:48.684 [2024-06-11 12:27:01.448081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.448369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.448377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 00:32:48.684 [2024-06-11 12:27:01.448786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.449117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.449125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 00:32:48.684 [2024-06-11 12:27:01.449466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.449801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.449810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 00:32:48.684 [2024-06-11 12:27:01.450110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.450329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.450337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 00:32:48.684 [2024-06-11 12:27:01.450498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.450681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.450688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 00:32:48.684 [2024-06-11 12:27:01.450939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.451221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.451229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 00:32:48.684 [2024-06-11 12:27:01.451535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.451860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.451867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 
00:32:48.684 [2024-06-11 12:27:01.452187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.452488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.452496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 00:32:48.684 [2024-06-11 12:27:01.452806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.453153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.453161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 00:32:48.684 [2024-06-11 12:27:01.453472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.453801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.453809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 00:32:48.684 [2024-06-11 12:27:01.454155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.454503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.454510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 00:32:48.684 [2024-06-11 12:27:01.454858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.455180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.455188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 00:32:48.684 [2024-06-11 12:27:01.455373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.455699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.455707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 00:32:48.684 [2024-06-11 12:27:01.456024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.456318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.456326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 
00:32:48.684 [2024-06-11 12:27:01.456630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.456969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.456978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 00:32:48.684 [2024-06-11 12:27:01.457149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.457482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.457490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 00:32:48.684 [2024-06-11 12:27:01.457798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.458122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.458130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 00:32:48.684 [2024-06-11 12:27:01.458361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.458394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.458400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 00:32:48.684 [2024-06-11 12:27:01.458744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.458930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.458938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 00:32:48.684 [2024-06-11 12:27:01.459253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.459592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.459600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 00:32:48.684 [2024-06-11 12:27:01.459909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.460188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.460196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 
00:32:48.684 [2024-06-11 12:27:01.460359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.460681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.460689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 00:32:48.684 [2024-06-11 12:27:01.461023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.461341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.461350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 00:32:48.684 [2024-06-11 12:27:01.461702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.461913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.461921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 00:32:48.684 [2024-06-11 12:27:01.462260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.462427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.462435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 00:32:48.684 [2024-06-11 12:27:01.462625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.462949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.462957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 00:32:48.684 [2024-06-11 12:27:01.463262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.463305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.463313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 00:32:48.684 [2024-06-11 12:27:01.463467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.463809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.463817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.684 qpair failed and we were unable to recover it. 
00:32:48.684 [2024-06-11 12:27:01.464130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.684 [2024-06-11 12:27:01.464325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.464332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 00:32:48.685 [2024-06-11 12:27:01.464636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.464803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.464812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 00:32:48.685 [2024-06-11 12:27:01.465183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.465496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.465504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 00:32:48.685 [2024-06-11 12:27:01.465819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.466149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.466157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 00:32:48.685 [2024-06-11 12:27:01.466465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.466785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.466792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 00:32:48.685 [2024-06-11 12:27:01.466953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.467142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.467150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 00:32:48.685 [2024-06-11 12:27:01.467313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.467589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.467597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 
00:32:48.685 [2024-06-11 12:27:01.467907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.468231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.468239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 00:32:48.685 [2024-06-11 12:27:01.468566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.468749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.468757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 00:32:48.685 [2024-06-11 12:27:01.469075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.469402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.469409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 00:32:48.685 [2024-06-11 12:27:01.469566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.469894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.469902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 00:32:48.685 [2024-06-11 12:27:01.470235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.470561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.470569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 00:32:48.685 [2024-06-11 12:27:01.470777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.470940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.470948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 00:32:48.685 [2024-06-11 12:27:01.471131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.471444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.471452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 
00:32:48.685 [2024-06-11 12:27:01.471781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.471814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.471821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 00:32:48.685 [2024-06-11 12:27:01.472003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.472283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.472291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 00:32:48.685 [2024-06-11 12:27:01.472603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.472794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.472802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 00:32:48.685 [2024-06-11 12:27:01.473096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.473287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.473295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 00:32:48.685 [2024-06-11 12:27:01.473479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.473645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.473652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 00:32:48.685 [2024-06-11 12:27:01.473810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.473985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.473993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 00:32:48.685 [2024-06-11 12:27:01.474306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.474605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.474613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 
00:32:48.685 [2024-06-11 12:27:01.474932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.475260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.475269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 00:32:48.685 [2024-06-11 12:27:01.475485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.475767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.475775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 00:32:48.685 [2024-06-11 12:27:01.476086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.476418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.476426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 00:32:48.685 [2024-06-11 12:27:01.476600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.476913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.476921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 00:32:48.685 [2024-06-11 12:27:01.477109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.477271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.477278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 00:32:48.685 [2024-06-11 12:27:01.477581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.477876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.477883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 00:32:48.685 [2024-06-11 12:27:01.478186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.478520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.478527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 
00:32:48.685 [2024-06-11 12:27:01.478840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.479178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.685 [2024-06-11 12:27:01.479187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.685 qpair failed and we were unable to recover it. 00:32:48.685 [2024-06-11 12:27:01.479409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.479704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.479711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.686 qpair failed and we were unable to recover it. 00:32:48.686 [2024-06-11 12:27:01.480041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.480354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.480362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.686 qpair failed and we were unable to recover it. 00:32:48.686 [2024-06-11 12:27:01.480514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.480846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.480854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.686 qpair failed and we were unable to recover it. 00:32:48.686 [2024-06-11 12:27:01.481167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.481474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.481483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.686 qpair failed and we were unable to recover it. 00:32:48.686 [2024-06-11 12:27:01.481795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.482084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.482092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.686 qpair failed and we were unable to recover it. 00:32:48.686 [2024-06-11 12:27:01.482387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.482725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.482733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.686 qpair failed and we were unable to recover it. 
00:32:48.686 [2024-06-11 12:27:01.482906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.483267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.483276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.686 qpair failed and we were unable to recover it. 00:32:48.686 [2024-06-11 12:27:01.483593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.483921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.483929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.686 qpair failed and we were unable to recover it. 00:32:48.686 [2024-06-11 12:27:01.484088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.484403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.484411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.686 qpair failed and we were unable to recover it. 00:32:48.686 [2024-06-11 12:27:01.484727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.484909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.484918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.686 qpair failed and we were unable to recover it. 00:32:48.686 [2024-06-11 12:27:01.485109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.485390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.485398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.686 qpair failed and we were unable to recover it. 00:32:48.686 [2024-06-11 12:27:01.485645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.485907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.485915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.686 qpair failed and we were unable to recover it. 00:32:48.686 [2024-06-11 12:27:01.486138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.486474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.486482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.686 qpair failed and we were unable to recover it. 
00:32:48.686 [2024-06-11 12:27:01.486697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.487006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.487014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.686 qpair failed and we were unable to recover it. 00:32:48.686 [2024-06-11 12:27:01.487321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.487618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.487626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.686 qpair failed and we were unable to recover it. 00:32:48.686 [2024-06-11 12:27:01.487789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.488092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.488100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.686 qpair failed and we were unable to recover it. 00:32:48.686 [2024-06-11 12:27:01.488421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.488602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.488610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.686 qpair failed and we were unable to recover it. 00:32:48.686 [2024-06-11 12:27:01.488907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.489201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.489210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.686 qpair failed and we were unable to recover it. 00:32:48.686 [2024-06-11 12:27:01.489370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.489647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.489655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.686 qpair failed and we were unable to recover it. 00:32:48.686 [2024-06-11 12:27:01.489966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.490149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.490157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.686 qpair failed and we were unable to recover it. 
00:32:48.686 [2024-06-11 12:27:01.490467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.490692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.490700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.686 qpair failed and we were unable to recover it. 00:32:48.686 [2024-06-11 12:27:01.491043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.491238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.491245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.686 qpair failed and we were unable to recover it. 00:32:48.686 [2024-06-11 12:27:01.491426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.491616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.491623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.686 qpair failed and we were unable to recover it. 00:32:48.686 [2024-06-11 12:27:01.491920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.686 [2024-06-11 12:27:01.492120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.492128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 00:32:48.687 [2024-06-11 12:27:01.492461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.492779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.492787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 00:32:48.687 [2024-06-11 12:27:01.493081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.493390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.493398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 00:32:48.687 [2024-06-11 12:27:01.493715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.494039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.494048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 
00:32:48.687 [2024-06-11 12:27:01.494356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.494640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.494647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 00:32:48.687 [2024-06-11 12:27:01.494962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.495146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.495154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 00:32:48.687 [2024-06-11 12:27:01.495472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.495514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.495522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 00:32:48.687 [2024-06-11 12:27:01.495691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.495993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.496001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 00:32:48.687 [2024-06-11 12:27:01.496308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.496626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.496633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 00:32:48.687 [2024-06-11 12:27:01.496798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.496989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.496997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 00:32:48.687 [2024-06-11 12:27:01.497155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.497466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.497474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 
00:32:48.687 [2024-06-11 12:27:01.497671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.497930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.497939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 00:32:48.687 [2024-06-11 12:27:01.498251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.498556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.498563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 00:32:48.687 [2024-06-11 12:27:01.498782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.499020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.499028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 00:32:48.687 [2024-06-11 12:27:01.499360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.499551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.499560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 00:32:48.687 [2024-06-11 12:27:01.499745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.500107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.500116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 00:32:48.687 [2024-06-11 12:27:01.500434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.500773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.500781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 00:32:48.687 [2024-06-11 12:27:01.500957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.501231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.501239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 
00:32:48.687 [2024-06-11 12:27:01.501536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.501866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.501874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 00:32:48.687 [2024-06-11 12:27:01.502113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.502152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.502160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 00:32:48.687 [2024-06-11 12:27:01.502475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.502701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.502708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 00:32:48.687 [2024-06-11 12:27:01.502762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.502917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.502925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 00:32:48.687 [2024-06-11 12:27:01.503103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.503278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.503286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 00:32:48.687 [2024-06-11 12:27:01.503595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.503884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.503892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 00:32:48.687 [2024-06-11 12:27:01.504196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.504536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.504544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 
00:32:48.687 [2024-06-11 12:27:01.504842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.505173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.505181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 00:32:48.687 [2024-06-11 12:27:01.505536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.505725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.505733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 00:32:48.687 [2024-06-11 12:27:01.506058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.506384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.506392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 00:32:48.687 [2024-06-11 12:27:01.506701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.506998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.687 [2024-06-11 12:27:01.507006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.687 qpair failed and we were unable to recover it. 00:32:48.688 [2024-06-11 12:27:01.507342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.507663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.507671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 00:32:48.688 [2024-06-11 12:27:01.507843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.508164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.508172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 00:32:48.688 [2024-06-11 12:27:01.508431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.508753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.508762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 
00:32:48.688 [2024-06-11 12:27:01.509075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.509253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.509261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 00:32:48.688 [2024-06-11 12:27:01.509571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.509892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.509901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 00:32:48.688 [2024-06-11 12:27:01.510245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.510588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.510598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 00:32:48.688 [2024-06-11 12:27:01.510771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.511113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.511121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 00:32:48.688 [2024-06-11 12:27:01.511475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.511806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.511814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 00:32:48.688 [2024-06-11 12:27:01.512012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.512170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.512178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 00:32:48.688 [2024-06-11 12:27:01.512343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.512659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.512666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 
00:32:48.688 [2024-06-11 12:27:01.512978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.513270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.513278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 00:32:48.688 [2024-06-11 12:27:01.513580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.513900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.513908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 00:32:48.688 [2024-06-11 12:27:01.514090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.514405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.514413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 00:32:48.688 [2024-06-11 12:27:01.514713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.515046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.515054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 00:32:48.688 [2024-06-11 12:27:01.515186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.515484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.515493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 00:32:48.688 [2024-06-11 12:27:01.515807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.516090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.516100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 00:32:48.688 [2024-06-11 12:27:01.516438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.516735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.516743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 
00:32:48.688 [2024-06-11 12:27:01.517087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.517394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.517403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 00:32:48.688 [2024-06-11 12:27:01.517718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.518009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.518022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 00:32:48.688 [2024-06-11 12:27:01.518347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.518659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.518667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 00:32:48.688 [2024-06-11 12:27:01.518960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.519130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.519138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 00:32:48.688 [2024-06-11 12:27:01.519318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.519637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.519644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 00:32:48.688 [2024-06-11 12:27:01.519948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.520140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.520148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 00:32:48.688 [2024-06-11 12:27:01.520480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.520764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.520772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 
00:32:48.688 [2024-06-11 12:27:01.521111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.521426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.521434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 00:32:48.688 [2024-06-11 12:27:01.521635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.521801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.521811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 00:32:48.688 [2024-06-11 12:27:01.522132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.522485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.522493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 00:32:48.688 [2024-06-11 12:27:01.522810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.523138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.523146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.688 qpair failed and we were unable to recover it. 00:32:48.688 [2024-06-11 12:27:01.523499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.688 [2024-06-11 12:27:01.523800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.523809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 00:32:48.689 [2024-06-11 12:27:01.524147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.524495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.524503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 00:32:48.689 [2024-06-11 12:27:01.524676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.524970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.524979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 
00:32:48.689 [2024-06-11 12:27:01.525164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.525497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.525505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 00:32:48.689 [2024-06-11 12:27:01.525685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.525958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.525965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 00:32:48.689 [2024-06-11 12:27:01.526274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.526451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.526460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 00:32:48.689 [2024-06-11 12:27:01.526606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.526950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.526958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 00:32:48.689 [2024-06-11 12:27:01.527209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.527395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.527405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 00:32:48.689 [2024-06-11 12:27:01.527705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.528028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.528037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 00:32:48.689 [2024-06-11 12:27:01.528221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.528416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.528424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 
00:32:48.689 [2024-06-11 12:27:01.528622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.528814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.528823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 00:32:48.689 [2024-06-11 12:27:01.529002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.529185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.529193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 00:32:48.689 [2024-06-11 12:27:01.529397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.529563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.529570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 00:32:48.689 [2024-06-11 12:27:01.529837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.530157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.530166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 00:32:48.689 [2024-06-11 12:27:01.530336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.530676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.530686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 00:32:48.689 [2024-06-11 12:27:01.530865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.531176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.531184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 00:32:48.689 [2024-06-11 12:27:01.531498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.531705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.531714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 
00:32:48.689 [2024-06-11 12:27:01.532023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.532185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.532193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 00:32:48.689 [2024-06-11 12:27:01.532370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.532659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.532667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 00:32:48.689 [2024-06-11 12:27:01.532997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.533331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.533339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 00:32:48.689 [2024-06-11 12:27:01.533671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.533967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.533975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 00:32:48.689 [2024-06-11 12:27:01.534009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.534324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.534332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 00:32:48.689 [2024-06-11 12:27:01.534645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.534838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.534845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 00:32:48.689 [2024-06-11 12:27:01.535147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.535478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.535485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 
00:32:48.689 [2024-06-11 12:27:01.535784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.535827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.535835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 00:32:48.689 [2024-06-11 12:27:01.536115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.536437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.536445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 00:32:48.689 [2024-06-11 12:27:01.536608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.536907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.536915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 00:32:48.689 [2024-06-11 12:27:01.537086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.537269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.537277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 00:32:48.689 [2024-06-11 12:27:01.537435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.537756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.689 [2024-06-11 12:27:01.537764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.689 qpair failed and we were unable to recover it. 00:32:48.689 [2024-06-11 12:27:01.538060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.538394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.538402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.690 qpair failed and we were unable to recover it. 00:32:48.690 [2024-06-11 12:27:01.538716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.539007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.539015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.690 qpair failed and we were unable to recover it. 
00:32:48.690 [2024-06-11 12:27:01.539125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.539303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.539312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.690 qpair failed and we were unable to recover it. 00:32:48.690 [2024-06-11 12:27:01.539501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.539831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.539839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.690 qpair failed and we were unable to recover it. 00:32:48.690 [2024-06-11 12:27:01.540023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.540347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.540355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.690 qpair failed and we were unable to recover it. 00:32:48.690 [2024-06-11 12:27:01.540691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.540985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.540993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.690 qpair failed and we were unable to recover it. 00:32:48.690 [2024-06-11 12:27:01.541172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.541486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.541494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.690 qpair failed and we were unable to recover it. 00:32:48.690 [2024-06-11 12:27:01.541829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.542177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.542186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.690 qpair failed and we were unable to recover it. 00:32:48.690 [2024-06-11 12:27:01.542493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.542835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.542843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.690 qpair failed and we were unable to recover it. 
00:32:48.690 [2024-06-11 12:27:01.543157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.543460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.543468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.690 qpair failed and we were unable to recover it. 00:32:48.690 [2024-06-11 12:27:01.543778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.543967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.543975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.690 qpair failed and we were unable to recover it. 00:32:48.690 [2024-06-11 12:27:01.544260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.544596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.544604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.690 qpair failed and we were unable to recover it. 00:32:48.690 [2024-06-11 12:27:01.544933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.545245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.545253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.690 qpair failed and we were unable to recover it. 00:32:48.690 [2024-06-11 12:27:01.545562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.545726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.545734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.690 qpair failed and we were unable to recover it. 00:32:48.690 [2024-06-11 12:27:01.545930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.546207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.546215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.690 qpair failed and we were unable to recover it. 00:32:48.690 [2024-06-11 12:27:01.546538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.546844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.546852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.690 qpair failed and we were unable to recover it. 
00:32:48.690 [2024-06-11 12:27:01.547191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.547521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.547529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.690 qpair failed and we were unable to recover it. 00:32:48.690 [2024-06-11 12:27:01.547851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.548179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.548187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.690 qpair failed and we were unable to recover it. 00:32:48.690 [2024-06-11 12:27:01.548510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.548860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.548868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.690 qpair failed and we were unable to recover it. 00:32:48.690 [2024-06-11 12:27:01.549044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.549229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.549237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.690 qpair failed and we were unable to recover it. 00:32:48.690 [2024-06-11 12:27:01.549550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.549864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.549872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.690 qpair failed and we were unable to recover it. 00:32:48.690 [2024-06-11 12:27:01.550180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.550484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.550492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.690 qpair failed and we were unable to recover it. 00:32:48.690 [2024-06-11 12:27:01.550792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.550832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.550841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.690 qpair failed and we were unable to recover it. 
00:32:48.690 [2024-06-11 12:27:01.551024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.551297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.551305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.690 qpair failed and we were unable to recover it. 00:32:48.690 [2024-06-11 12:27:01.551490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.690 [2024-06-11 12:27:01.551528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.551535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 00:32:48.691 [2024-06-11 12:27:01.551820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.552152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.552160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 00:32:48.691 [2024-06-11 12:27:01.552469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.552804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.552812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 00:32:48.691 [2024-06-11 12:27:01.553129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.553310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.553319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 00:32:48.691 [2024-06-11 12:27:01.553628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.553797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.553805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 00:32:48.691 [2024-06-11 12:27:01.554102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.554206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.554215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 
00:32:48.691 [2024-06-11 12:27:01.554489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.554687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.554696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 00:32:48.691 [2024-06-11 12:27:01.554996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.555284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.555292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 00:32:48.691 [2024-06-11 12:27:01.555590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.555759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.555767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 00:32:48.691 [2024-06-11 12:27:01.556030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.556199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.556208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 00:32:48.691 [2024-06-11 12:27:01.556248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.556537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.556544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 00:32:48.691 [2024-06-11 12:27:01.556699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.556991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.556999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 00:32:48.691 [2024-06-11 12:27:01.557329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.557651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.557659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 
00:32:48.691 [2024-06-11 12:27:01.557832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.558178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.558187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 00:32:48.691 [2024-06-11 12:27:01.558479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.558819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.558827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 00:32:48.691 [2024-06-11 12:27:01.559014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.559333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.559342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 00:32:48.691 [2024-06-11 12:27:01.559507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.559692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.559700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 00:32:48.691 [2024-06-11 12:27:01.559897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.560185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.560194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 00:32:48.691 [2024-06-11 12:27:01.560523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.560577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.560585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 00:32:48.691 [2024-06-11 12:27:01.560808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.560990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.560998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 
00:32:48.691 [2024-06-11 12:27:01.561293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.561634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.561643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 00:32:48.691 [2024-06-11 12:27:01.561952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.562243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.562252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 00:32:48.691 [2024-06-11 12:27:01.562564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.562754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.562762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 00:32:48.691 [2024-06-11 12:27:01.563098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.563415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.563423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 00:32:48.691 [2024-06-11 12:27:01.563605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.563763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.563771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 00:32:48.691 [2024-06-11 12:27:01.563959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.564258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.564266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 00:32:48.691 [2024-06-11 12:27:01.564537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.564816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.564824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 
00:32:48.691 [2024-06-11 12:27:01.564980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.565252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.565261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 00:32:48.691 [2024-06-11 12:27:01.565555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.565860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.691 [2024-06-11 12:27:01.565869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.691 qpair failed and we were unable to recover it. 00:32:48.691 [2024-06-11 12:27:01.566188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.566412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.566419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 00:32:48.692 [2024-06-11 12:27:01.566730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.566914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.566922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 00:32:48.692 [2024-06-11 12:27:01.567085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.567352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.567360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 00:32:48.692 [2024-06-11 12:27:01.567704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.567861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.567868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 00:32:48.692 [2024-06-11 12:27:01.568047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.568357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.568365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 
00:32:48.692 [2024-06-11 12:27:01.568532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.568684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.568692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 00:32:48.692 [2024-06-11 12:27:01.568997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.569307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.569316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 00:32:48.692 [2024-06-11 12:27:01.569599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.569840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.569848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 00:32:48.692 [2024-06-11 12:27:01.570113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.570261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.570269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 00:32:48.692 [2024-06-11 12:27:01.570537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.570829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.570837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 00:32:48.692 [2024-06-11 12:27:01.571030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.571315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.571322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 00:32:48.692 [2024-06-11 12:27:01.571654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.571978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.571986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 
00:32:48.692 [2024-06-11 12:27:01.572194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.572466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.572474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 00:32:48.692 [2024-06-11 12:27:01.572779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.573086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.573094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 00:32:48.692 [2024-06-11 12:27:01.573267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.573531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.573539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 00:32:48.692 [2024-06-11 12:27:01.573872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.574174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.574183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 00:32:48.692 [2024-06-11 12:27:01.574504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.574714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.574722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 00:32:48.692 [2024-06-11 12:27:01.575053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.575243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.575250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 00:32:48.692 [2024-06-11 12:27:01.575541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.575851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.575859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 
00:32:48.692 [2024-06-11 12:27:01.576043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.576200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.576208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 00:32:48.692 [2024-06-11 12:27:01.576388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.576725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.576733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 00:32:48.692 [2024-06-11 12:27:01.577042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.577227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.577235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 00:32:48.692 [2024-06-11 12:27:01.577549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.577728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.577736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 00:32:48.692 [2024-06-11 12:27:01.578032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.578378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.578386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 00:32:48.692 [2024-06-11 12:27:01.578582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.578724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.578732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 00:32:48.692 [2024-06-11 12:27:01.578951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.579265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.579274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 
00:32:48.692 [2024-06-11 12:27:01.579567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.579875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.579883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 00:32:48.692 [2024-06-11 12:27:01.580046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.580201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.580209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.692 qpair failed and we were unable to recover it. 00:32:48.692 [2024-06-11 12:27:01.580516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.692 [2024-06-11 12:27:01.580829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.580837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 00:32:48.693 [2024-06-11 12:27:01.581149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.581445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.581453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 00:32:48.693 [2024-06-11 12:27:01.581607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.581774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.581782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 00:32:48.693 [2024-06-11 12:27:01.582109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.582464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.582472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 00:32:48.693 [2024-06-11 12:27:01.582791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.583107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.583115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 
00:32:48.693 [2024-06-11 12:27:01.583422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.583740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.583748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 00:32:48.693 [2024-06-11 12:27:01.583899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.584175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.584184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 00:32:48.693 [2024-06-11 12:27:01.584494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.584714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.584722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 00:32:48.693 [2024-06-11 12:27:01.584895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.585208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.585216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 00:32:48.693 [2024-06-11 12:27:01.585579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.585912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.585921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 00:32:48.693 [2024-06-11 12:27:01.586187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.586496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.586504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 00:32:48.693 [2024-06-11 12:27:01.586833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.586994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.587002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 
00:32:48.693 [2024-06-11 12:27:01.587300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.587619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.587626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 00:32:48.693 [2024-06-11 12:27:01.587936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.588221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.588230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 00:32:48.693 [2024-06-11 12:27:01.588539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.588865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.588872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 00:32:48.693 [2024-06-11 12:27:01.589158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.589327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.589335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 00:32:48.693 [2024-06-11 12:27:01.589600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.589918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.589926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 00:32:48.693 [2024-06-11 12:27:01.590062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.590253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.590262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 00:32:48.693 [2024-06-11 12:27:01.590586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.590929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.590938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 
00:32:48.693 [2024-06-11 12:27:01.591153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.591475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.591484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 00:32:48.693 [2024-06-11 12:27:01.591800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.591982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.591990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 00:32:48.693 [2024-06-11 12:27:01.592255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.592586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.592594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 00:32:48.693 [2024-06-11 12:27:01.592934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.593242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.593251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 00:32:48.693 [2024-06-11 12:27:01.593539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.593737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.593745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 00:32:48.693 [2024-06-11 12:27:01.594054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.594376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.594384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 00:32:48.693 [2024-06-11 12:27:01.594693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.594858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.594866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 
00:32:48.693 [2024-06-11 12:27:01.595184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.595499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.595507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 00:32:48.693 [2024-06-11 12:27:01.595842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.595879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.595885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 00:32:48.693 [2024-06-11 12:27:01.596197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.596533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.693 [2024-06-11 12:27:01.596542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.693 qpair failed and we were unable to recover it. 00:32:48.693 [2024-06-11 12:27:01.596724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.597016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.597032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 00:32:48.694 [2024-06-11 12:27:01.597318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.597656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.597665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 00:32:48.694 [2024-06-11 12:27:01.598007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.598191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.598199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 00:32:48.694 [2024-06-11 12:27:01.598463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.598804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.598812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 
00:32:48.694 [2024-06-11 12:27:01.599151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.599481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.599489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 00:32:48.694 [2024-06-11 12:27:01.599781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.599976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.599984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 00:32:48.694 [2024-06-11 12:27:01.600320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.600635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.600643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 00:32:48.694 [2024-06-11 12:27:01.600959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.601118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.601127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 00:32:48.694 [2024-06-11 12:27:01.601405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.601600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.601607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 00:32:48.694 [2024-06-11 12:27:01.601789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.602088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.602100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 00:32:48.694 [2024-06-11 12:27:01.602384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.602524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.602532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 
00:32:48.694 [2024-06-11 12:27:01.602804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.602842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.602848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 00:32:48.694 [2024-06-11 12:27:01.603118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.603289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.603297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 00:32:48.694 [2024-06-11 12:27:01.603643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.603963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.603971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 00:32:48.694 [2024-06-11 12:27:01.604148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.604302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.604310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 00:32:48.694 [2024-06-11 12:27:01.604582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.604876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.604884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 00:32:48.694 [2024-06-11 12:27:01.605059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.605391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.605399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 00:32:48.694 [2024-06-11 12:27:01.605600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.605776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.605784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 
00:32:48.694 [2024-06-11 12:27:01.605945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.606100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.606109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 00:32:48.694 [2024-06-11 12:27:01.606289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.606331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.606340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 00:32:48.694 [2024-06-11 12:27:01.606657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.606941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.606949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 00:32:48.694 [2024-06-11 12:27:01.607238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.607420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.607427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 00:32:48.694 [2024-06-11 12:27:01.607580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.607862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.607869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 00:32:48.694 [2024-06-11 12:27:01.608186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.608512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.608520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 00:32:48.694 [2024-06-11 12:27:01.608828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.609147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.609155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 
00:32:48.694 [2024-06-11 12:27:01.609470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.609763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.609771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 00:32:48.694 [2024-06-11 12:27:01.610070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.610267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.610274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 00:32:48.694 [2024-06-11 12:27:01.610541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.610858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.610867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 00:32:48.694 [2024-06-11 12:27:01.611060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.611380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.694 [2024-06-11 12:27:01.611388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.694 qpair failed and we were unable to recover it. 00:32:48.694 [2024-06-11 12:27:01.611720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.612043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.612051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.695 qpair failed and we were unable to recover it. 00:32:48.695 [2024-06-11 12:27:01.612369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.612671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.612679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.695 qpair failed and we were unable to recover it. 00:32:48.695 [2024-06-11 12:27:01.612995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.613168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.613177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.695 qpair failed and we were unable to recover it. 
00:32:48.695 [2024-06-11 12:27:01.613517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.613838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.613846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.695 qpair failed and we were unable to recover it. 00:32:48.695 [2024-06-11 12:27:01.614155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.614491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.614499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.695 qpair failed and we were unable to recover it. 00:32:48.695 [2024-06-11 12:27:01.614797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.614998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.615005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.695 qpair failed and we were unable to recover it. 00:32:48.695 [2024-06-11 12:27:01.615167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.615487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.615495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.695 qpair failed and we were unable to recover it. 00:32:48.695 [2024-06-11 12:27:01.615677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.615967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.615975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.695 qpair failed and we were unable to recover it. 00:32:48.695 [2024-06-11 12:27:01.616278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.616604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.616611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.695 qpair failed and we were unable to recover it. 00:32:48.695 [2024-06-11 12:27:01.616927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.617300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.617308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.695 qpair failed and we were unable to recover it. 
00:32:48.695 [2024-06-11 12:27:01.617612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.617956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.617964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.695 qpair failed and we were unable to recover it. 00:32:48.695 [2024-06-11 12:27:01.618284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.618449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.618456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.695 qpair failed and we were unable to recover it. 00:32:48.695 [2024-06-11 12:27:01.618772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.619113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.619122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.695 qpair failed and we were unable to recover it. 00:32:48.695 [2024-06-11 12:27:01.619425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.619762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.619770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.695 qpair failed and we were unable to recover it. 00:32:48.695 [2024-06-11 12:27:01.619951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.620128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.620137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.695 qpair failed and we were unable to recover it. 00:32:48.695 [2024-06-11 12:27:01.620330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.620671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.620679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.695 qpair failed and we were unable to recover it. 00:32:48.695 [2024-06-11 12:27:01.620976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.621166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.621173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.695 qpair failed and we were unable to recover it. 
00:32:48.695 [2024-06-11 12:27:01.621479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.621549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.621556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.695 qpair failed and we were unable to recover it. 00:32:48.695 [2024-06-11 12:27:01.621713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.622021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.622030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.695 qpair failed and we were unable to recover it. 00:32:48.695 [2024-06-11 12:27:01.622343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.622677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.622685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.695 qpair failed and we were unable to recover it. 00:32:48.695 [2024-06-11 12:27:01.623028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.623315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.623323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.695 qpair failed and we were unable to recover it. 00:32:48.695 [2024-06-11 12:27:01.623654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.623815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.623823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.695 qpair failed and we were unable to recover it. 00:32:48.695 [2024-06-11 12:27:01.624100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.624432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.624440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.695 qpair failed and we were unable to recover it. 00:32:48.695 [2024-06-11 12:27:01.624744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.625023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.625032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.695 qpair failed and we were unable to recover it. 
00:32:48.695 [2024-06-11 12:27:01.625352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.625691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.695 [2024-06-11 12:27:01.625699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 00:32:48.696 [2024-06-11 12:27:01.625995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.626326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.626334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 00:32:48.696 [2024-06-11 12:27:01.626510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.626849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.626857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 00:32:48.696 [2024-06-11 12:27:01.627167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.627461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.627470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 00:32:48.696 [2024-06-11 12:27:01.627764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.627811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.627817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 00:32:48.696 [2024-06-11 12:27:01.628090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.628479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.628487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 00:32:48.696 [2024-06-11 12:27:01.628800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.629122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.629131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 
00:32:48.696 [2024-06-11 12:27:01.629302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.629376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.629384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 00:32:48.696 [2024-06-11 12:27:01.629683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.629990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.629997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 00:32:48.696 [2024-06-11 12:27:01.630291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.630611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.630619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 00:32:48.696 [2024-06-11 12:27:01.630917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.631103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.631111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 00:32:48.696 [2024-06-11 12:27:01.631445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.631633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.631641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 00:32:48.696 [2024-06-11 12:27:01.631942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.632226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.632234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 00:32:48.696 [2024-06-11 12:27:01.632409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.632693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.632701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 
00:32:48.696 [2024-06-11 12:27:01.633012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.633310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.633318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 00:32:48.696 [2024-06-11 12:27:01.633617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.633782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.633791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 00:32:48.696 [2024-06-11 12:27:01.634089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.634384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.634392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 00:32:48.696 [2024-06-11 12:27:01.634726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.635022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.635030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 00:32:48.696 [2024-06-11 12:27:01.635396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.635680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.635687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 00:32:48.696 [2024-06-11 12:27:01.635870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.636180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.636187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 00:32:48.696 [2024-06-11 12:27:01.636525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.636847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.636854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 
00:32:48.696 [2024-06-11 12:27:01.637150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.637407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.637415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 00:32:48.696 [2024-06-11 12:27:01.637727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.638065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.638074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 00:32:48.696 [2024-06-11 12:27:01.638388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.638711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.638719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 00:32:48.696 [2024-06-11 12:27:01.639090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.639405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.639414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 00:32:48.696 [2024-06-11 12:27:01.639453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.639767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.639774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 00:32:48.696 [2024-06-11 12:27:01.640076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.640221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.640228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 00:32:48.696 [2024-06-11 12:27:01.640431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.640726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.640734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 
00:32:48.696 [2024-06-11 12:27:01.641015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.641304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.641313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.696 qpair failed and we were unable to recover it. 00:32:48.696 [2024-06-11 12:27:01.641610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.696 [2024-06-11 12:27:01.641931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.641939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 00:32:48.697 [2024-06-11 12:27:01.642230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.642546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.642554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 00:32:48.697 [2024-06-11 12:27:01.642867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.643192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.643200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 00:32:48.697 [2024-06-11 12:27:01.643536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.643856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.643864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 00:32:48.697 [2024-06-11 12:27:01.644161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.644461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.644469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 00:32:48.697 [2024-06-11 12:27:01.644776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.645092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.645100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 
00:32:48.697 [2024-06-11 12:27:01.645402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.645709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.645718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 00:32:48.697 [2024-06-11 12:27:01.646010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.646299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.646307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 00:32:48.697 [2024-06-11 12:27:01.646602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.646795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.646802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 00:32:48.697 [2024-06-11 12:27:01.646985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.647280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.647288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 00:32:48.697 [2024-06-11 12:27:01.647610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.647648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.647655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 00:32:48.697 [2024-06-11 12:27:01.648020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.648200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.648207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 00:32:48.697 [2024-06-11 12:27:01.648506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.648836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.648845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 
00:32:48.697 [2024-06-11 12:27:01.649140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.649485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.649494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 00:32:48.697 [2024-06-11 12:27:01.649647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.649963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.649972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 00:32:48.697 [2024-06-11 12:27:01.650154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.650472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.650481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 00:32:48.697 [2024-06-11 12:27:01.650787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.651118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.651126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 00:32:48.697 [2024-06-11 12:27:01.651450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.651742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.651750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 00:32:48.697 [2024-06-11 12:27:01.652060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.652404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.652412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 00:32:48.697 [2024-06-11 12:27:01.652715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.652890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.652898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 
00:32:48.697 [2024-06-11 12:27:01.653190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.653486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.653494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 00:32:48.697 [2024-06-11 12:27:01.653808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.654098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.654106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 00:32:48.697 [2024-06-11 12:27:01.654433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.654591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.654599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 00:32:48.697 [2024-06-11 12:27:01.654754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.654910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.654918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 00:32:48.697 [2024-06-11 12:27:01.655216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.655534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.655543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 00:32:48.697 [2024-06-11 12:27:01.655837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.656171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.656179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 00:32:48.697 [2024-06-11 12:27:01.656389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.656704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.656712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 
00:32:48.697 [2024-06-11 12:27:01.657008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.657302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.657310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 00:32:48.697 [2024-06-11 12:27:01.657607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.657928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.697 [2024-06-11 12:27:01.657937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.697 qpair failed and we were unable to recover it. 00:32:48.697 [2024-06-11 12:27:01.658103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.658426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.658434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 00:32:48.698 [2024-06-11 12:27:01.658572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.658854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.658862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 00:32:48.698 [2024-06-11 12:27:01.659051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.659108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.659115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 00:32:48.698 [2024-06-11 12:27:01.659382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.659674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.659682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 00:32:48.698 [2024-06-11 12:27:01.659994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.660288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.660297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 
00:32:48.698 [2024-06-11 12:27:01.660455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.660619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.660627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 00:32:48.698 [2024-06-11 12:27:01.660813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.660855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.660863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 00:32:48.698 [2024-06-11 12:27:01.661070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.661243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.661250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 00:32:48.698 [2024-06-11 12:27:01.661561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.661735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.661743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 00:32:48.698 [2024-06-11 12:27:01.662089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.662437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.662445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 00:32:48.698 [2024-06-11 12:27:01.662631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.662926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.662934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 00:32:48.698 [2024-06-11 12:27:01.663201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.663507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.663515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 
00:32:48.698 [2024-06-11 12:27:01.663831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.664033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.664041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 00:32:48.698 [2024-06-11 12:27:01.664356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.664399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.664405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 00:32:48.698 [2024-06-11 12:27:01.664707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.665029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.665038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 00:32:48.698 [2024-06-11 12:27:01.665224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.665555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.665563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 00:32:48.698 [2024-06-11 12:27:01.665885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.666093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.666102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 00:32:48.698 [2024-06-11 12:27:01.666297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.666475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.666484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 00:32:48.698 [2024-06-11 12:27:01.666817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.667011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.667031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 
00:32:48.698 [2024-06-11 12:27:01.667344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.667535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.667544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 00:32:48.698 [2024-06-11 12:27:01.667705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.667860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.667868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 00:32:48.698 [2024-06-11 12:27:01.668129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.668322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.668330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 00:32:48.698 [2024-06-11 12:27:01.668678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.669024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.669033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 00:32:48.698 [2024-06-11 12:27:01.669340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.669541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.669549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 00:32:48.698 [2024-06-11 12:27:01.669850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.670206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.670215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 00:32:48.698 [2024-06-11 12:27:01.670433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.670733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.670742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 
00:32:48.698 [2024-06-11 12:27:01.671037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.671327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.671335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 00:32:48.698 [2024-06-11 12:27:01.671493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.671834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.671843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.698 qpair failed and we were unable to recover it. 00:32:48.698 [2024-06-11 12:27:01.672023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.672323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.698 [2024-06-11 12:27:01.672331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 00:32:48.699 [2024-06-11 12:27:01.672643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.672988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.672996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 00:32:48.699 [2024-06-11 12:27:01.673351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.673606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.673613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 00:32:48.699 [2024-06-11 12:27:01.673935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.674236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.674244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 00:32:48.699 [2024-06-11 12:27:01.674562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.674747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.674755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 
00:32:48.699 [2024-06-11 12:27:01.675079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.675401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.675409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 00:32:48.699 [2024-06-11 12:27:01.675751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.676004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.676012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 00:32:48.699 [2024-06-11 12:27:01.676318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.676504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.676512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 00:32:48.699 [2024-06-11 12:27:01.676832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.677151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.677159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 00:32:48.699 [2024-06-11 12:27:01.677462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.677779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.677787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 00:32:48.699 [2024-06-11 12:27:01.678085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.678418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.678427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 00:32:48.699 [2024-06-11 12:27:01.678760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.679104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.679114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 
00:32:48.699 [2024-06-11 12:27:01.679437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.679766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.679775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 00:32:48.699 [2024-06-11 12:27:01.680064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.680393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.680402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 00:32:48.699 [2024-06-11 12:27:01.680694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.680874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.680882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 00:32:48.699 [2024-06-11 12:27:01.681087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.681256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.681265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 00:32:48.699 [2024-06-11 12:27:01.681304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.681599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.681607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 00:32:48.699 [2024-06-11 12:27:01.681940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.682084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.682093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 00:32:48.699 [2024-06-11 12:27:01.682400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.682722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.682731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 
00:32:48.699 [2024-06-11 12:27:01.683033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.683195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.683203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 00:32:48.699 [2024-06-11 12:27:01.683483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.683796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.683804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 00:32:48.699 [2024-06-11 12:27:01.684121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.684326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.684337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 00:32:48.699 [2024-06-11 12:27:01.684713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.685053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.685062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 00:32:48.699 [2024-06-11 12:27:01.685404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.685566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.685575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 00:32:48.699 [2024-06-11 12:27:01.685901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.686289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.686298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 00:32:48.699 [2024-06-11 12:27:01.686612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.686962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.686970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 
00:32:48.699 [2024-06-11 12:27:01.687308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.687611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.687619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 00:32:48.699 [2024-06-11 12:27:01.687777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.688040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.688050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.699 qpair failed and we were unable to recover it. 00:32:48.699 [2024-06-11 12:27:01.688233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.688576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.699 [2024-06-11 12:27:01.688584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.700 qpair failed and we were unable to recover it. 00:32:48.700 [2024-06-11 12:27:01.688894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.689184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.689192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.700 qpair failed and we were unable to recover it. 00:32:48.700 [2024-06-11 12:27:01.689501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.689795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.689803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.700 qpair failed and we were unable to recover it. 00:32:48.700 [2024-06-11 12:27:01.689974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.690211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.690221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.700 qpair failed and we were unable to recover it. 00:32:48.700 [2024-06-11 12:27:01.690525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.690701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.690709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.700 qpair failed and we were unable to recover it. 
00:32:48.700 [2024-06-11 12:27:01.690966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.691266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.691274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.700 qpair failed and we were unable to recover it. 00:32:48.700 [2024-06-11 12:27:01.691593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.691906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.691914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.700 qpair failed and we were unable to recover it. 00:32:48.700 [2024-06-11 12:27:01.692233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.692459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.692467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.700 qpair failed and we were unable to recover it. 00:32:48.700 [2024-06-11 12:27:01.692782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.693078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.693086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.700 qpair failed and we were unable to recover it. 00:32:48.700 [2024-06-11 12:27:01.693410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.693732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.693741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.700 qpair failed and we were unable to recover it. 00:32:48.700 [2024-06-11 12:27:01.694046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.694323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.694331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.700 qpair failed and we were unable to recover it. 00:32:48.700 [2024-06-11 12:27:01.694673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.694981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.694989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.700 qpair failed and we were unable to recover it. 
00:32:48.700 [2024-06-11 12:27:01.695168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.695342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.695350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.700 qpair failed and we were unable to recover it. 00:32:48.700 [2024-06-11 12:27:01.695669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.695891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.695901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.700 qpair failed and we were unable to recover it. 00:32:48.700 [2024-06-11 12:27:01.696073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.696388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.696396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.700 qpair failed and we were unable to recover it. 00:32:48.700 [2024-06-11 12:27:01.696718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.697063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.697071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.700 qpair failed and we were unable to recover it. 00:32:48.700 [2024-06-11 12:27:01.697301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.697599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.697606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.700 qpair failed and we were unable to recover it. 00:32:48.700 [2024-06-11 12:27:01.697804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.698144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.698152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.700 qpair failed and we were unable to recover it. 00:32:48.700 [2024-06-11 12:27:01.698460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.698784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.698791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.700 qpair failed and we were unable to recover it. 
00:32:48.700 [2024-06-11 12:27:01.698983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.699197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.699206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.700 qpair failed and we were unable to recover it. 00:32:48.700 [2024-06-11 12:27:01.699558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.699599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.699605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.700 qpair failed and we were unable to recover it. 00:32:48.700 [2024-06-11 12:27:01.699881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.700186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.700194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.700 qpair failed and we were unable to recover it. 00:32:48.700 [2024-06-11 12:27:01.700516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.700826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.700835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.700 qpair failed and we were unable to recover it. 00:32:48.700 [2024-06-11 12:27:01.701034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.701221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.701228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.700 qpair failed and we were unable to recover it. 00:32:48.700 [2024-06-11 12:27:01.701541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.701873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.701880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.700 qpair failed and we were unable to recover it. 00:32:48.700 [2024-06-11 12:27:01.702105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.702435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.702443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.700 qpair failed and we were unable to recover it. 
00:32:48.700 [2024-06-11 12:27:01.702636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.700 [2024-06-11 12:27:01.702959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.701 [2024-06-11 12:27:01.702967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.701 qpair failed and we were unable to recover it. 00:32:48.701 [2024-06-11 12:27:01.703281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.701 [2024-06-11 12:27:01.703587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.701 [2024-06-11 12:27:01.703596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.701 qpair failed and we were unable to recover it. 00:32:48.701 [2024-06-11 12:27:01.703906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.701 [2024-06-11 12:27:01.704067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.701 [2024-06-11 12:27:01.704075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.701 qpair failed and we were unable to recover it. 00:32:48.701 [2024-06-11 12:27:01.704279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.701 [2024-06-11 12:27:01.704635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.701 [2024-06-11 12:27:01.704642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.701 qpair failed and we were unable to recover it. 00:32:48.701 [2024-06-11 12:27:01.704953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.701 [2024-06-11 12:27:01.705276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.701 [2024-06-11 12:27:01.705284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.701 qpair failed and we were unable to recover it. 00:32:48.701 [2024-06-11 12:27:01.705574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.705893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.705901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.967 qpair failed and we were unable to recover it. 00:32:48.967 [2024-06-11 12:27:01.706191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.706525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.706532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.967 qpair failed and we were unable to recover it. 
00:32:48.967 [2024-06-11 12:27:01.706686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.706967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.706976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.967 qpair failed and we were unable to recover it. 00:32:48.967 [2024-06-11 12:27:01.707325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.707619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.707628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.967 qpair failed and we were unable to recover it. 00:32:48.967 [2024-06-11 12:27:01.707669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.707977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.707986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.967 qpair failed and we were unable to recover it. 00:32:48.967 [2024-06-11 12:27:01.708304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.708642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.708650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.967 qpair failed and we were unable to recover it. 00:32:48.967 [2024-06-11 12:27:01.708831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.709181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.709189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.967 qpair failed and we were unable to recover it. 00:32:48.967 [2024-06-11 12:27:01.709530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.709881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.709890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.967 qpair failed and we were unable to recover it. 00:32:48.967 [2024-06-11 12:27:01.710236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.710572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.710580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.967 qpair failed and we were unable to recover it. 
00:32:48.967 [2024-06-11 12:27:01.710894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.711080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.711088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.967 qpair failed and we were unable to recover it. 00:32:48.967 [2024-06-11 12:27:01.711275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.711556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.711564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.967 qpair failed and we were unable to recover it. 00:32:48.967 [2024-06-11 12:27:01.711879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.712038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.712047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.967 qpair failed and we were unable to recover it. 00:32:48.967 [2024-06-11 12:27:01.712261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.712585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.712593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.967 qpair failed and we were unable to recover it. 00:32:48.967 [2024-06-11 12:27:01.712653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.712942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.712951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.967 qpair failed and we were unable to recover it. 00:32:48.967 [2024-06-11 12:27:01.713270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.713589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.713598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.967 qpair failed and we were unable to recover it. 00:32:48.967 [2024-06-11 12:27:01.713913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.714098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.714106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.967 qpair failed and we were unable to recover it. 
00:32:48.967 [2024-06-11 12:27:01.714278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.714598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.714607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.967 qpair failed and we were unable to recover it. 00:32:48.967 [2024-06-11 12:27:01.714804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.715039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.715048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.967 qpair failed and we were unable to recover it. 00:32:48.967 [2024-06-11 12:27:01.715305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.715637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.715645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.967 qpair failed and we were unable to recover it. 00:32:48.967 [2024-06-11 12:27:01.715827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.716010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.716023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.967 qpair failed and we were unable to recover it. 00:32:48.967 [2024-06-11 12:27:01.716316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.716658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.716666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.967 qpair failed and we were unable to recover it. 00:32:48.967 [2024-06-11 12:27:01.716848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.717132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.717140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.967 qpair failed and we were unable to recover it. 00:32:48.967 [2024-06-11 12:27:01.717458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.717620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.717628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.967 qpair failed and we were unable to recover it. 
00:32:48.967 [2024-06-11 12:27:01.717900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.967 [2024-06-11 12:27:01.718041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.718049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 00:32:48.968 [2024-06-11 12:27:01.718200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.718387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.718394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 00:32:48.968 [2024-06-11 12:27:01.718700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.718898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.718906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 00:32:48.968 [2024-06-11 12:27:01.719068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.719259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.719267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 00:32:48.968 [2024-06-11 12:27:01.719551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.719875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.719883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 00:32:48.968 [2024-06-11 12:27:01.720219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.720592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.720599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 00:32:48.968 [2024-06-11 12:27:01.720909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.721186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.721197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 
00:32:48.968 [2024-06-11 12:27:01.721518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.721843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.721851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 00:32:48.968 [2024-06-11 12:27:01.722166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.722475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.722484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 00:32:48.968 [2024-06-11 12:27:01.722644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.722827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.722836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 00:32:48.968 [2024-06-11 12:27:01.722995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.723303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.723311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 00:32:48.968 [2024-06-11 12:27:01.723506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.723824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.723832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 00:32:48.968 [2024-06-11 12:27:01.724046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.724328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.724335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 00:32:48.968 [2024-06-11 12:27:01.724649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.724818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.724827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 
00:32:48.968 [2024-06-11 12:27:01.725095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.725148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.725154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 00:32:48.968 [2024-06-11 12:27:01.725360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.725527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.725535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 00:32:48.968 12:27:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:48.968 [2024-06-11 12:27:01.725746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 12:27:01 -- common/autotest_common.sh@852 -- # return 0 00:32:48.968 [2024-06-11 12:27:01.726057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.726066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 00:32:48.968 [2024-06-11 12:27:01.726144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 12:27:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:48.968 [2024-06-11 12:27:01.726408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.726416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 00:32:48.968 12:27:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:48.968 [2024-06-11 12:27:01.726742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 12:27:01 -- common/autotest_common.sh@10 -- # set +x 00:32:48.968 [2024-06-11 12:27:01.727062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.727071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 00:32:48.968 [2024-06-11 12:27:01.727385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.727716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.727724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 
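The "-- common/autotest_common.sh@848 -- # (( i == 0 ))" style fragments mixed into the entries above are not additional socket errors: they appear to be bash xtrace lines from the test scripts, interleaved because the harness trace and the target's stderr share the same console, with the script name and line number carried in the trace prefix. A rough sketch of a PS4 that produces a prefix of that shape (illustrative only, not the harness's exact setting):

    #!/usr/bin/env bash
    # Illustrative xtrace prefix: show the source file and line of each traced
    # command, similar in shape to the "common/autotest_common.sh@848 -- #" markers.
    export PS4='-- ${BASH_SOURCE##*/}@${LINENO} -- # '
    set -x
    i=0
    (( i == 0 ))
    set +x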
00:32:48.968 [2024-06-11 12:27:01.727878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.728169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.728178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 00:32:48.968 [2024-06-11 12:27:01.728485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.728773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.728782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 00:32:48.968 [2024-06-11 12:27:01.728952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.729048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.729055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 00:32:48.968 [2024-06-11 12:27:01.729348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.729512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.729519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 00:32:48.968 [2024-06-11 12:27:01.729708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.729972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.729982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 00:32:48.968 [2024-06-11 12:27:01.730201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.730510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.730518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 00:32:48.968 [2024-06-11 12:27:01.730820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.731127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.731135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.968 qpair failed and we were unable to recover it. 
00:32:48.968 [2024-06-11 12:27:01.731453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.968 [2024-06-11 12:27:01.731741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.731749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 00:32:48.969 [2024-06-11 12:27:01.732062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.732403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.732411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 00:32:48.969 [2024-06-11 12:27:01.732709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.733031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.733039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 00:32:48.969 [2024-06-11 12:27:01.733346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.733680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.733688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 00:32:48.969 [2024-06-11 12:27:01.733862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.734237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.734246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 00:32:48.969 [2024-06-11 12:27:01.734424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.734763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.734772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 00:32:48.969 [2024-06-11 12:27:01.734943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.735109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.735116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 
00:32:48.969 [2024-06-11 12:27:01.735420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.735604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.735613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 00:32:48.969 [2024-06-11 12:27:01.735925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.736236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.736244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 00:32:48.969 [2024-06-11 12:27:01.736442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.736738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.736746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 00:32:48.969 [2024-06-11 12:27:01.737085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.737402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.737410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 00:32:48.969 [2024-06-11 12:27:01.737734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.737901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.737909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 00:32:48.969 [2024-06-11 12:27:01.738175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.738479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.738489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 00:32:48.969 [2024-06-11 12:27:01.738808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.739003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.739011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 
00:32:48.969 [2024-06-11 12:27:01.739212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.739392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.739402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 00:32:48.969 [2024-06-11 12:27:01.739768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.740068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.740077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 00:32:48.969 [2024-06-11 12:27:01.740299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.740466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.740475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 00:32:48.969 [2024-06-11 12:27:01.740671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.740958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.740965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 00:32:48.969 [2024-06-11 12:27:01.741355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.741535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.741542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 00:32:48.969 [2024-06-11 12:27:01.741856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.742169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.742178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 00:32:48.969 [2024-06-11 12:27:01.742243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.742542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.742551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 
00:32:48.969 [2024-06-11 12:27:01.742863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.743188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.743197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 00:32:48.969 [2024-06-11 12:27:01.743356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.743673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.743682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 00:32:48.969 [2024-06-11 12:27:01.743726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.744056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.744065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 00:32:48.969 [2024-06-11 12:27:01.744262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.744587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.744596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 00:32:48.969 [2024-06-11 12:27:01.744917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.744986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.744992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 00:32:48.969 [2024-06-11 12:27:01.745206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.745453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.745461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 00:32:48.969 [2024-06-11 12:27:01.745633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.745962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.745970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 
00:32:48.969 [2024-06-11 12:27:01.746171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.746371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.969 [2024-06-11 12:27:01.746379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.969 qpair failed and we were unable to recover it. 00:32:48.970 [2024-06-11 12:27:01.746572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.746912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.746920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 00:32:48.970 [2024-06-11 12:27:01.747246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.747597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.747605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 00:32:48.970 [2024-06-11 12:27:01.747791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.748103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.748111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 00:32:48.970 [2024-06-11 12:27:01.748261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.748526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.748536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 00:32:48.970 [2024-06-11 12:27:01.748848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.749029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.749037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 00:32:48.970 [2024-06-11 12:27:01.749210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.749389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.749396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 
00:32:48.970 [2024-06-11 12:27:01.749714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.750058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.750067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 00:32:48.970 [2024-06-11 12:27:01.750397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.750565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.750573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 00:32:48.970 [2024-06-11 12:27:01.750719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.750904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.750912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 00:32:48.970 [2024-06-11 12:27:01.751067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.751406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.751413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 00:32:48.970 [2024-06-11 12:27:01.751728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.752069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.752078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 00:32:48.970 [2024-06-11 12:27:01.752410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.752738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.752746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 00:32:48.970 [2024-06-11 12:27:01.752922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.753232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.753240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 
00:32:48.970 [2024-06-11 12:27:01.753413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.753718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.753727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 00:32:48.970 [2024-06-11 12:27:01.753934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.754234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.754244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 00:32:48.970 [2024-06-11 12:27:01.754555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.754761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.754768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 00:32:48.970 [2024-06-11 12:27:01.754964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.755318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.755326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 00:32:48.970 [2024-06-11 12:27:01.755622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.755967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.755975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 00:32:48.970 [2024-06-11 12:27:01.756272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.756598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.756606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 00:32:48.970 [2024-06-11 12:27:01.756873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.757185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.757193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 
00:32:48.970 [2024-06-11 12:27:01.757348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.757626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.757634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 00:32:48.970 [2024-06-11 12:27:01.757974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.758261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.758269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 00:32:48.970 [2024-06-11 12:27:01.758481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.758774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.758782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 00:32:48.970 [2024-06-11 12:27:01.758982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.759065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.759073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 00:32:48.970 [2024-06-11 12:27:01.759408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.759727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.759735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 00:32:48.970 [2024-06-11 12:27:01.760068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.760393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.760402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 00:32:48.970 [2024-06-11 12:27:01.760717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.761010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.761020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 
00:32:48.970 [2024-06-11 12:27:01.761347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.761668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.761677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.970 qpair failed and we were unable to recover it. 00:32:48.970 [2024-06-11 12:27:01.761989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.970 [2024-06-11 12:27:01.762320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.762329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.971 qpair failed and we were unable to recover it. 00:32:48.971 [2024-06-11 12:27:01.762567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.762757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.762765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.971 qpair failed and we were unable to recover it. 00:32:48.971 [2024-06-11 12:27:01.762945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.763237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.763246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.971 qpair failed and we were unable to recover it. 00:32:48.971 [2024-06-11 12:27:01.763592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.763785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.763794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.971 qpair failed and we were unable to recover it. 00:32:48.971 [2024-06-11 12:27:01.764091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.764431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.764440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.971 qpair failed and we were unable to recover it. 00:32:48.971 [2024-06-11 12:27:01.764717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.765008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.765019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.971 qpair failed and we were unable to recover it. 
00:32:48.971 [2024-06-11 12:27:01.765208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 12:27:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:48.971 [2024-06-11 12:27:01.765548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.765559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.971 qpair failed and we were unable to recover it. 00:32:48.971 12:27:01 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:48.971 [2024-06-11 12:27:01.765741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.765907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.765914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.971 qpair failed and we were unable to recover it. 00:32:48.971 12:27:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:48.971 [2024-06-11 12:27:01.766215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 12:27:01 -- common/autotest_common.sh@10 -- # set +x 00:32:48.971 [2024-06-11 12:27:01.766560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.766570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.971 qpair failed and we were unable to recover it. 00:32:48.971 [2024-06-11 12:27:01.766902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.767183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.767191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.971 qpair failed and we were unable to recover it. 00:32:48.971 [2024-06-11 12:27:01.767510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.767805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.767812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.971 qpair failed and we were unable to recover it. 00:32:48.971 [2024-06-11 12:27:01.768047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.768357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.768365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.971 qpair failed and we were unable to recover it. 
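The shell trace in this stretch (host/target_disconnect.sh@19) shows the test creating the backing device on the target: rpc_cmd bdev_malloc_create 64 512 -b Malloc0, a 64 MB RAM-backed bdev with a 512-byte block size named Malloc0. rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, so a rough direct equivalent, sketched here assuming the default RPC socket, would be:

  # Create a 64 MB malloc bdev with 512-byte blocks, named Malloc0,
  # on the running nvmf target (same parameters as the traced rpc_cmd).
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0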
00:32:48.971 [2024-06-11 12:27:01.768537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.768878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.768886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.971 qpair failed and we were unable to recover it. 00:32:48.971 [2024-06-11 12:27:01.769218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.769553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.769560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.971 qpair failed and we were unable to recover it. 00:32:48.971 [2024-06-11 12:27:01.769887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.769931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.769938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.971 qpair failed and we were unable to recover it. 00:32:48.971 [2024-06-11 12:27:01.770271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.770600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.770608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.971 qpair failed and we were unable to recover it. 00:32:48.971 [2024-06-11 12:27:01.770920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.771185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.771193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.971 qpair failed and we were unable to recover it. 00:32:48.971 [2024-06-11 12:27:01.771522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.771694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.771701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.971 qpair failed and we were unable to recover it. 00:32:48.971 [2024-06-11 12:27:01.772002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.772319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.772327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.971 qpair failed and we were unable to recover it. 
00:32:48.971 [2024-06-11 12:27:01.772641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.772967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.772974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.971 qpair failed and we were unable to recover it. 00:32:48.971 [2024-06-11 12:27:01.773294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.773621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.773629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.971 qpair failed and we were unable to recover it. 00:32:48.971 [2024-06-11 12:27:01.773824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.774141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.774149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.971 qpair failed and we were unable to recover it. 00:32:48.971 [2024-06-11 12:27:01.774532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.774673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.971 [2024-06-11 12:27:01.774681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.971 qpair failed and we were unable to recover it. 00:32:48.972 [2024-06-11 12:27:01.774955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.775160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.775168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 00:32:48.972 [2024-06-11 12:27:01.775477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.775657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.775666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 00:32:48.972 [2024-06-11 12:27:01.775763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.776052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.776060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 
00:32:48.972 [2024-06-11 12:27:01.776235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.776521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.776529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 00:32:48.972 [2024-06-11 12:27:01.776698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.776974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.776982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 00:32:48.972 [2024-06-11 12:27:01.777300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.777621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.777630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 00:32:48.972 [2024-06-11 12:27:01.777674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.777836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.777843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 00:32:48.972 [2024-06-11 12:27:01.778031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.778325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.778333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 00:32:48.972 [2024-06-11 12:27:01.778649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.778971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.778979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 00:32:48.972 [2024-06-11 12:27:01.779294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.779605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.779612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 
00:32:48.972 [2024-06-11 12:27:01.779917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.780128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.780136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 00:32:48.972 [2024-06-11 12:27:01.780452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.780754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.780761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 00:32:48.972 Malloc0 00:32:48.972 [2024-06-11 12:27:01.781077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.781117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.781124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 00:32:48.972 [2024-06-11 12:27:01.781472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 12:27:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:48.972 [2024-06-11 12:27:01.781815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.781824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 00:32:48.972 12:27:01 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:48.972 [2024-06-11 12:27:01.782135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 12:27:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:48.972 [2024-06-11 12:27:01.782308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.782316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 00:32:48.972 12:27:01 -- common/autotest_common.sh@10 -- # set +x 00:32:48.972 [2024-06-11 12:27:01.782628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.782942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.782950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 00:32:48.972 [2024-06-11 12:27:01.783266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.783583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.783591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 
00:32:48.972 [2024-06-11 12:27:01.783905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.784115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.784123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 00:32:48.972 [2024-06-11 12:27:01.784464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.784838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.784845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 00:32:48.972 [2024-06-11 12:27:01.785156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.785475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.785483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 00:32:48.972 [2024-06-11 12:27:01.785787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.786097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.786105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 00:32:48.972 [2024-06-11 12:27:01.786139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.786294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.786304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 00:32:48.972 [2024-06-11 12:27:01.786605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.786948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.786956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 00:32:48.972 [2024-06-11 12:27:01.787270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.787588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.787595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 
00:32:48.972 [2024-06-11 12:27:01.787910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.788236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.788244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 00:32:48.972 [2024-06-11 12:27:01.788300] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:48.972 [2024-06-11 12:27:01.788456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.788718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.788726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 00:32:48.972 [2024-06-11 12:27:01.788909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.789067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.789075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.972 qpair failed and we were unable to recover it. 00:32:48.972 [2024-06-11 12:27:01.789352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.972 [2024-06-11 12:27:01.789532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.789539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 00:32:48.973 [2024-06-11 12:27:01.789873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.790202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.790210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 00:32:48.973 [2024-06-11 12:27:01.790520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.790681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.790689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 00:32:48.973 [2024-06-11 12:27:01.791008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.791330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.791339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 
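The "*** TCP Transport Init ***" NOTICE from tcp.c confirms that the nvmf_create_transport -t tcp call traced a few lines earlier (host/target_disconnect.sh@21) has taken effect on the target. Creating the transport by itself does not open port 4420, which is why the initiator's connect() attempts keep failing at this point in the log. A direct rpc.py form of the same step, sketched with only the transport type reproduced (the traced command also passes -o, which is not interpreted here), would be:

  # Enable the TCP transport on the target; per-subsystem listeners come later.
  ./scripts/rpc.py nvmf_create_transport -t TCP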
00:32:48.973 [2024-06-11 12:27:01.791663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.791979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.791987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 00:32:48.973 [2024-06-11 12:27:01.792242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.792404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.792412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 00:32:48.973 [2024-06-11 12:27:01.792727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.792909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.792916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 00:32:48.973 [2024-06-11 12:27:01.793237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.793555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.793563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 00:32:48.973 [2024-06-11 12:27:01.793919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.794243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.794252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 00:32:48.973 [2024-06-11 12:27:01.794564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.794920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.794928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 00:32:48.973 [2024-06-11 12:27:01.795136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.795296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.795304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 
00:32:48.973 [2024-06-11 12:27:01.795617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.795943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.795951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 00:32:48.973 [2024-06-11 12:27:01.796161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.796514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.796522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 00:32:48.973 [2024-06-11 12:27:01.796714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.797048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.797057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 00:32:48.973 12:27:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:48.973 [2024-06-11 12:27:01.797399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 12:27:01 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:48.973 [2024-06-11 12:27:01.797750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.797758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 00:32:48.973 12:27:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:48.973 [2024-06-11 12:27:01.798100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.798141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.798147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 00:32:48.973 12:27:01 -- common/autotest_common.sh@10 -- # set +x 00:32:48.973 [2024-06-11 12:27:01.798418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.798672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.798680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 
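host/target_disconnect.sh@22 now issues rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, creating an allow-any-host subsystem with that serial number. The connect() retries can only succeed once a TCP listener on 10.0.0.2:4420 is attached to a subsystem. A typical continuation of this setup, sketched with direct rpc.py calls (the namespace and listener steps are the usual pattern in these tests and are assumed here, not taken from this part of the log), would be:

  # Same subsystem as the traced command: allow any host (-a), fixed serial (-s).
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # Usual follow-ups: expose Malloc0 as a namespace and start listening on the
  # address/port the initiator has been dialing all along.
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420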
00:32:48.973 [2024-06-11 12:27:01.798998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.799163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.799172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 00:32:48.973 [2024-06-11 12:27:01.799478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.799869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.799876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 00:32:48.973 [2024-06-11 12:27:01.800166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.800381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.800388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 00:32:48.973 [2024-06-11 12:27:01.800573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.800954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.800963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 00:32:48.973 [2024-06-11 12:27:01.801010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.801286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.801294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 00:32:48.973 [2024-06-11 12:27:01.801603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.801896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.801904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 00:32:48.973 [2024-06-11 12:27:01.802245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.802538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.802546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 
00:32:48.973 [2024-06-11 12:27:01.802721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.802916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.802924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 00:32:48.973 [2024-06-11 12:27:01.803230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.803431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.803438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 00:32:48.973 [2024-06-11 12:27:01.803593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.803887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.803896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 00:32:48.973 [2024-06-11 12:27:01.804229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.804565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.804573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.973 qpair failed and we were unable to recover it. 00:32:48.973 [2024-06-11 12:27:01.804883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.973 [2024-06-11 12:27:01.805066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.805075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 00:32:48.974 [2024-06-11 12:27:01.805119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.805405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.805413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 00:32:48.974 [2024-06-11 12:27:01.805727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.806051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.806059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 
00:32:48.974 [2024-06-11 12:27:01.806449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.806791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.806799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 00:32:48.974 [2024-06-11 12:27:01.807082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.807412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.807419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 00:32:48.974 [2024-06-11 12:27:01.807706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.807998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.808006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 00:32:48.974 [2024-06-11 12:27:01.808320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.808639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.808646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 00:32:48.974 [2024-06-11 12:27:01.808958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.809263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.809271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 00:32:48.974 12:27:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:48.974 [2024-06-11 12:27:01.809576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 12:27:01 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:48.974 [2024-06-11 12:27:01.809868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.809876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 00:32:48.974 12:27:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:48.974 [2024-06-11 12:27:01.810077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 12:27:01 -- common/autotest_common.sh@10 -- # set +x 00:32:48.974 [2024-06-11 12:27:01.810384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.810393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 
00:32:48.974 [2024-06-11 12:27:01.810788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.811037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.811045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 00:32:48.974 [2024-06-11 12:27:01.811363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.811523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.811530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 00:32:48.974 [2024-06-11 12:27:01.811801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.811998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.812005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 00:32:48.974 [2024-06-11 12:27:01.812353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.812520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.812528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 00:32:48.974 [2024-06-11 12:27:01.812844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.813026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.813034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 00:32:48.974 [2024-06-11 12:27:01.813350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.813545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.813552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 00:32:48.974 [2024-06-11 12:27:01.813708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.813980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.813987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 
00:32:48.974 [2024-06-11 12:27:01.814303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.814623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.814632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 00:32:48.974 [2024-06-11 12:27:01.814971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.815253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.815262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 00:32:48.974 [2024-06-11 12:27:01.815554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.815848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.815856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 00:32:48.974 [2024-06-11 12:27:01.816170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.816502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.816509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 00:32:48.974 [2024-06-11 12:27:01.816816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.817148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.817156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 00:32:48.974 [2024-06-11 12:27:01.817484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.817666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.817674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 00:32:48.974 [2024-06-11 12:27:01.818004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.818184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.818192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 
00:32:48.974 [2024-06-11 12:27:01.818389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.818679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.818686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 00:32:48.974 [2024-06-11 12:27:01.818872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.819252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.819260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 00:32:48.974 [2024-06-11 12:27:01.819565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.819881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.819889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.974 qpair failed and we were unable to recover it. 00:32:48.974 [2024-06-11 12:27:01.820069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.820367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.974 [2024-06-11 12:27:01.820375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.975 qpair failed and we were unable to recover it. 00:32:48.975 [2024-06-11 12:27:01.820689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.820874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.820883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.975 qpair failed and we were unable to recover it. 00:32:48.975 [2024-06-11 12:27:01.821188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.821377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 12:27:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:48.975 [2024-06-11 12:27:01.821384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.975 qpair failed and we were unable to recover it. 00:32:48.975 [2024-06-11 12:27:01.821562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.821768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.821776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.975 qpair failed and we were unable to recover it. 
00:32:48.975 12:27:01 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:48.975 [2024-06-11 12:27:01.822075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 12:27:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:48.975 [2024-06-11 12:27:01.822311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.822319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.975 qpair failed and we were unable to recover it. 00:32:48.975 12:27:01 -- common/autotest_common.sh@10 -- # set +x 00:32:48.975 [2024-06-11 12:27:01.822484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.822645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.822653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.975 qpair failed and we were unable to recover it. 00:32:48.975 [2024-06-11 12:27:01.823121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.823311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.823318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.975 qpair failed and we were unable to recover it. 00:32:48.975 [2024-06-11 12:27:01.823634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.823927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.823936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.975 qpair failed and we were unable to recover it. 00:32:48.975 [2024-06-11 12:27:01.824113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.824387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.824395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.975 qpair failed and we were unable to recover it. 00:32:48.975 [2024-06-11 12:27:01.824576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.824881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.824889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.975 qpair failed and we were unable to recover it. 00:32:48.975 [2024-06-11 12:27:01.825075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.825452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.825460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.975 qpair failed and we were unable to recover it. 
00:32:48.975 [2024-06-11 12:27:01.825772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.826109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.826117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.975 qpair failed and we were unable to recover it. 00:32:48.975 [2024-06-11 12:27:01.826302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.826459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.826466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.975 qpair failed and we were unable to recover it. 00:32:48.975 [2024-06-11 12:27:01.826792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.827126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.827134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.975 qpair failed and we were unable to recover it. 00:32:48.975 [2024-06-11 12:27:01.827443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.827762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.827770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.975 qpair failed and we were unable to recover it. 00:32:48.975 [2024-06-11 12:27:01.828084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.828402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.975 [2024-06-11 12:27:01.828409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd358000b90 with addr=10.0.0.2, port=4420 00:32:48.975 qpair failed and we were unable to recover it. 
00:32:48.975 [2024-06-11 12:27:01.828553] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:48.975 12:27:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:48.975 12:27:01 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:48.975 12:27:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:48.975 12:27:01 -- common/autotest_common.sh@10 -- # set +x 00:32:48.975 [2024-06-11 12:27:01.839092] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.975 [2024-06-11 12:27:01.839160] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.975 [2024-06-11 12:27:01.839176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.975 [2024-06-11 12:27:01.839182] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.975 [2024-06-11 12:27:01.839186] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:48.975 [2024-06-11 12:27:01.839201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.975 qpair failed and we were unable to recover it. 00:32:48.975 12:27:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:48.975 12:27:01 -- host/target_disconnect.sh@58 -- # wait 1705009 00:32:48.975 [2024-06-11 12:27:01.849079] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.975 [2024-06-11 12:27:01.849130] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.975 [2024-06-11 12:27:01.849142] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.975 [2024-06-11 12:27:01.849147] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.975 [2024-06-11 12:27:01.849151] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:48.975 [2024-06-11 12:27:01.849163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.975 qpair failed and we were unable to recover it. 
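Buried in the xtrace output above, host/target_disconnect.sh configures the target through rpc_cmd: it creates the subsystem, attaches a namespace, and adds the TCP data and discovery listeners, after which tcp.c reports "NVMe/TCP Target Listening on 10.0.0.2 port 4420". Pulled out of the interleaved log, the sequence is equivalent to the following direct rpc.py calls; the ./scripts/rpc.py path and the pre-existing Malloc0 bdev are assumptions, while the NQN, serial number, address and port are copied from the trace:

# sketch of the traced steps as plain rpc.py invocations (not the script itself)
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial number
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose the Malloc0 bdev as a namespace
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420            # discovery service on the same address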
00:32:48.975 [2024-06-11 12:27:01.859014] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.975 [2024-06-11 12:27:01.859074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.975 [2024-06-11 12:27:01.859085] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.975 [2024-06-11 12:27:01.859090] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.975 [2024-06-11 12:27:01.859095] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:48.975 [2024-06-11 12:27:01.859105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.975 qpair failed and we were unable to recover it. 00:32:48.975 [2024-06-11 12:27:01.869066] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.975 [2024-06-11 12:27:01.869123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.975 [2024-06-11 12:27:01.869134] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.975 [2024-06-11 12:27:01.869139] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.975 [2024-06-11 12:27:01.869144] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:48.975 [2024-06-11 12:27:01.869154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.975 qpair failed and we were unable to recover it. 00:32:48.975 [2024-06-11 12:27:01.879105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.975 [2024-06-11 12:27:01.879165] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.975 [2024-06-11 12:27:01.879176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.975 [2024-06-11 12:27:01.879183] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.975 [2024-06-11 12:27:01.879188] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:48.975 [2024-06-11 12:27:01.879198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.975 qpair failed and we were unable to recover it. 
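From this point the failure signature changes: the TCP connection itself now succeeds, but the target rejects the Fabrics CONNECT for the I/O queue ("Unknown controller ID 0x1") and the host sees the CONNECT complete with sct 1, sc 130. Read as NVMe status fields, SCT 0x1 is Command Specific Status and, for the Fabrics Connect command, SC 0x82 (decimal 130) should correspond to Connect Invalid Parameters, which is consistent with the target-side complaint; the host then gives up on the qpair with CQ transport error -6 (ENXIO, "No such device or address"). Decimal-to-hex check, for reference:

printf 'sct=0x%x sc=0x%x\n' 1 130   # -> sct=0x1 sc=0x82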
00:32:48.976 [2024-06-11 12:27:01.888979] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.976 [2024-06-11 12:27:01.889031] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.976 [2024-06-11 12:27:01.889042] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.976 [2024-06-11 12:27:01.889047] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.976 [2024-06-11 12:27:01.889052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:48.976 [2024-06-11 12:27:01.889062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.976 qpair failed and we were unable to recover it. 00:32:48.976 [2024-06-11 12:27:01.899105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.976 [2024-06-11 12:27:01.899155] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.976 [2024-06-11 12:27:01.899166] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.976 [2024-06-11 12:27:01.899171] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.976 [2024-06-11 12:27:01.899175] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:48.976 [2024-06-11 12:27:01.899185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.976 qpair failed and we were unable to recover it. 00:32:48.976 [2024-06-11 12:27:01.909122] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.976 [2024-06-11 12:27:01.909170] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.976 [2024-06-11 12:27:01.909181] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.976 [2024-06-11 12:27:01.909186] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.976 [2024-06-11 12:27:01.909190] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:48.976 [2024-06-11 12:27:01.909200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.976 qpair failed and we were unable to recover it. 
00:32:48.976 [2024-06-11 12:27:01.919183] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.976 [2024-06-11 12:27:01.919236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.976 [2024-06-11 12:27:01.919247] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.976 [2024-06-11 12:27:01.919252] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.976 [2024-06-11 12:27:01.919257] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:48.976 [2024-06-11 12:27:01.919267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.976 qpair failed and we were unable to recover it. 00:32:48.976 [2024-06-11 12:27:01.929192] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.976 [2024-06-11 12:27:01.929248] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.976 [2024-06-11 12:27:01.929258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.976 [2024-06-11 12:27:01.929264] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.976 [2024-06-11 12:27:01.929268] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:48.976 [2024-06-11 12:27:01.929278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.976 qpair failed and we were unable to recover it. 00:32:48.976 [2024-06-11 12:27:01.939228] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.976 [2024-06-11 12:27:01.939276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.976 [2024-06-11 12:27:01.939287] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.976 [2024-06-11 12:27:01.939291] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.976 [2024-06-11 12:27:01.939296] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:48.976 [2024-06-11 12:27:01.939306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.976 qpair failed and we were unable to recover it. 
00:32:48.976 [2024-06-11 12:27:01.949258] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.976 [2024-06-11 12:27:01.949309] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.976 [2024-06-11 12:27:01.949321] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.976 [2024-06-11 12:27:01.949326] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.976 [2024-06-11 12:27:01.949330] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:48.976 [2024-06-11 12:27:01.949340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.976 qpair failed and we were unable to recover it. 00:32:48.976 [2024-06-11 12:27:01.959280] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.976 [2024-06-11 12:27:01.959379] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.976 [2024-06-11 12:27:01.959391] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.976 [2024-06-11 12:27:01.959396] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.976 [2024-06-11 12:27:01.959400] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:48.976 [2024-06-11 12:27:01.959411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.976 qpair failed and we were unable to recover it. 00:32:48.976 [2024-06-11 12:27:01.969212] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.976 [2024-06-11 12:27:01.969271] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.976 [2024-06-11 12:27:01.969281] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.976 [2024-06-11 12:27:01.969289] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.976 [2024-06-11 12:27:01.969294] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:48.976 [2024-06-11 12:27:01.969304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.976 qpair failed and we were unable to recover it. 
00:32:48.976 [2024-06-11 12:27:01.979371] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.976 [2024-06-11 12:27:01.979421] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.976 [2024-06-11 12:27:01.979431] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.976 [2024-06-11 12:27:01.979436] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.976 [2024-06-11 12:27:01.979441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:48.976 [2024-06-11 12:27:01.979450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.976 qpair failed and we were unable to recover it. 00:32:48.976 [2024-06-11 12:27:01.989371] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.976 [2024-06-11 12:27:01.989421] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.976 [2024-06-11 12:27:01.989432] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.976 [2024-06-11 12:27:01.989437] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.976 [2024-06-11 12:27:01.989442] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:48.976 [2024-06-11 12:27:01.989452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:48.976 qpair failed and we were unable to recover it. 00:32:49.237 [2024-06-11 12:27:01.999487] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.237 [2024-06-11 12:27:01.999539] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.237 [2024-06-11 12:27:01.999550] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.237 [2024-06-11 12:27:01.999554] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.237 [2024-06-11 12:27:01.999559] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.237 [2024-06-11 12:27:01.999569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.237 qpair failed and we were unable to recover it. 
00:32:49.237 [2024-06-11 12:27:02.009443] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.237 [2024-06-11 12:27:02.009491] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.237 [2024-06-11 12:27:02.009501] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.237 [2024-06-11 12:27:02.009506] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.237 [2024-06-11 12:27:02.009511] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.237 [2024-06-11 12:27:02.009521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.237 qpair failed and we were unable to recover it. 00:32:49.237 [2024-06-11 12:27:02.019368] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.237 [2024-06-11 12:27:02.019413] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.237 [2024-06-11 12:27:02.019424] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.237 [2024-06-11 12:27:02.019429] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.237 [2024-06-11 12:27:02.019434] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.237 [2024-06-11 12:27:02.019444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.237 qpair failed and we were unable to recover it. 00:32:49.237 [2024-06-11 12:27:02.029462] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.237 [2024-06-11 12:27:02.029524] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.237 [2024-06-11 12:27:02.029535] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.237 [2024-06-11 12:27:02.029540] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.237 [2024-06-11 12:27:02.029544] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.237 [2024-06-11 12:27:02.029554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.237 qpair failed and we were unable to recover it. 
00:32:49.237 [2024-06-11 12:27:02.039406] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.237 [2024-06-11 12:27:02.039461] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.237 [2024-06-11 12:27:02.039472] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.237 [2024-06-11 12:27:02.039477] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.237 [2024-06-11 12:27:02.039481] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.237 [2024-06-11 12:27:02.039491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.237 qpair failed and we were unable to recover it. 00:32:49.237 [2024-06-11 12:27:02.049554] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.237 [2024-06-11 12:27:02.049599] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.237 [2024-06-11 12:27:02.049610] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.237 [2024-06-11 12:27:02.049615] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.237 [2024-06-11 12:27:02.049619] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.237 [2024-06-11 12:27:02.049629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.237 qpair failed and we were unable to recover it. 00:32:49.237 [2024-06-11 12:27:02.059584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.237 [2024-06-11 12:27:02.059630] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.237 [2024-06-11 12:27:02.059643] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.237 [2024-06-11 12:27:02.059648] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.237 [2024-06-11 12:27:02.059652] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.237 [2024-06-11 12:27:02.059662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.237 qpair failed and we were unable to recover it. 
00:32:49.237 [2024-06-11 12:27:02.069600] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.237 [2024-06-11 12:27:02.069649] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.237 [2024-06-11 12:27:02.069660] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.237 [2024-06-11 12:27:02.069665] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.237 [2024-06-11 12:27:02.069670] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.237 [2024-06-11 12:27:02.069680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.237 qpair failed and we were unable to recover it. 00:32:49.237 [2024-06-11 12:27:02.079548] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.237 [2024-06-11 12:27:02.079596] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.237 [2024-06-11 12:27:02.079607] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.237 [2024-06-11 12:27:02.079612] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.237 [2024-06-11 12:27:02.079617] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.237 [2024-06-11 12:27:02.079627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.237 qpair failed and we were unable to recover it. 00:32:49.237 [2024-06-11 12:27:02.089650] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.237 [2024-06-11 12:27:02.089710] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.237 [2024-06-11 12:27:02.089721] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.237 [2024-06-11 12:27:02.089726] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.237 [2024-06-11 12:27:02.089730] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.237 [2024-06-11 12:27:02.089740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.237 qpair failed and we were unable to recover it. 
00:32:49.237 [2024-06-11 12:27:02.099740] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.237 [2024-06-11 12:27:02.099797] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.237 [2024-06-11 12:27:02.099808] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.237 [2024-06-11 12:27:02.099813] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.237 [2024-06-11 12:27:02.099817] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.237 [2024-06-11 12:27:02.099830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.237 qpair failed and we were unable to recover it. 00:32:49.237 [2024-06-11 12:27:02.109781] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.237 [2024-06-11 12:27:02.109830] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.237 [2024-06-11 12:27:02.109841] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.237 [2024-06-11 12:27:02.109846] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.237 [2024-06-11 12:27:02.109851] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.237 [2024-06-11 12:27:02.109860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.237 qpair failed and we were unable to recover it. 00:32:49.237 [2024-06-11 12:27:02.119816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.237 [2024-06-11 12:27:02.119907] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.237 [2024-06-11 12:27:02.119917] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.237 [2024-06-11 12:27:02.119922] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.237 [2024-06-11 12:27:02.119927] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.237 [2024-06-11 12:27:02.119937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.237 qpair failed and we were unable to recover it. 
00:32:49.237 [2024-06-11 12:27:02.129813] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.237 [2024-06-11 12:27:02.129858] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.237 [2024-06-11 12:27:02.129869] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.237 [2024-06-11 12:27:02.129874] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.237 [2024-06-11 12:27:02.129878] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.237 [2024-06-11 12:27:02.129888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.237 qpair failed and we were unable to recover it. 00:32:49.237 [2024-06-11 12:27:02.139823] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.237 [2024-06-11 12:27:02.139868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.237 [2024-06-11 12:27:02.139878] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.237 [2024-06-11 12:27:02.139883] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.237 [2024-06-11 12:27:02.139887] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.238 [2024-06-11 12:27:02.139897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.238 qpair failed and we were unable to recover it. 00:32:49.238 [2024-06-11 12:27:02.149827] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.238 [2024-06-11 12:27:02.149879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.238 [2024-06-11 12:27:02.149895] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.238 [2024-06-11 12:27:02.149900] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.238 [2024-06-11 12:27:02.149905] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.238 [2024-06-11 12:27:02.149915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.238 qpair failed and we were unable to recover it. 
00:32:49.238 [2024-06-11 12:27:02.159875] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.238 [2024-06-11 12:27:02.159979] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.238 [2024-06-11 12:27:02.159990] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.238 [2024-06-11 12:27:02.159995] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.238 [2024-06-11 12:27:02.159999] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.238 [2024-06-11 12:27:02.160009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.238 qpair failed and we were unable to recover it. 00:32:49.238 [2024-06-11 12:27:02.169904] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.238 [2024-06-11 12:27:02.170001] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.238 [2024-06-11 12:27:02.170013] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.238 [2024-06-11 12:27:02.170021] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.238 [2024-06-11 12:27:02.170026] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.238 [2024-06-11 12:27:02.170036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.238 qpair failed and we were unable to recover it. 00:32:49.238 [2024-06-11 12:27:02.179950] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.238 [2024-06-11 12:27:02.180036] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.238 [2024-06-11 12:27:02.180047] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.238 [2024-06-11 12:27:02.180052] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.238 [2024-06-11 12:27:02.180057] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.238 [2024-06-11 12:27:02.180068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.238 qpair failed and we were unable to recover it. 
00:32:49.238 [2024-06-11 12:27:02.189960] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.238 [2024-06-11 12:27:02.190024] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.238 [2024-06-11 12:27:02.190035] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.238 [2024-06-11 12:27:02.190040] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.238 [2024-06-11 12:27:02.190044] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.238 [2024-06-11 12:27:02.190057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.238 qpair failed and we were unable to recover it. 00:32:49.238 [2024-06-11 12:27:02.199890] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.238 [2024-06-11 12:27:02.199991] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.238 [2024-06-11 12:27:02.200002] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.238 [2024-06-11 12:27:02.200007] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.238 [2024-06-11 12:27:02.200012] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.238 [2024-06-11 12:27:02.200027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.238 qpair failed and we were unable to recover it. 00:32:49.238 [2024-06-11 12:27:02.209899] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.238 [2024-06-11 12:27:02.209948] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.238 [2024-06-11 12:27:02.209958] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.238 [2024-06-11 12:27:02.209963] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.238 [2024-06-11 12:27:02.209968] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.238 [2024-06-11 12:27:02.209978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.238 qpair failed and we were unable to recover it. 
00:32:49.238 [2024-06-11 12:27:02.220052] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.238 [2024-06-11 12:27:02.220098] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.238 [2024-06-11 12:27:02.220109] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.238 [2024-06-11 12:27:02.220114] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.238 [2024-06-11 12:27:02.220118] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.238 [2024-06-11 12:27:02.220128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.238 qpair failed and we were unable to recover it. 00:32:49.238 [2024-06-11 12:27:02.230086] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.238 [2024-06-11 12:27:02.230141] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.238 [2024-06-11 12:27:02.230151] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.238 [2024-06-11 12:27:02.230156] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.238 [2024-06-11 12:27:02.230161] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.238 [2024-06-11 12:27:02.230171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.238 qpair failed and we were unable to recover it. 00:32:49.238 [2024-06-11 12:27:02.240115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.238 [2024-06-11 12:27:02.240172] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.238 [2024-06-11 12:27:02.240185] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.238 [2024-06-11 12:27:02.240190] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.238 [2024-06-11 12:27:02.240194] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.238 [2024-06-11 12:27:02.240204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.238 qpair failed and we were unable to recover it. 
00:32:49.238 [2024-06-11 12:27:02.250153] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.238 [2024-06-11 12:27:02.250201] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.238 [2024-06-11 12:27:02.250212] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.238 [2024-06-11 12:27:02.250217] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.238 [2024-06-11 12:27:02.250221] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.238 [2024-06-11 12:27:02.250231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.238 qpair failed and we were unable to recover it. 00:32:49.238 [2024-06-11 12:27:02.260020] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.238 [2024-06-11 12:27:02.260101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.238 [2024-06-11 12:27:02.260111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.238 [2024-06-11 12:27:02.260116] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.238 [2024-06-11 12:27:02.260121] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.238 [2024-06-11 12:27:02.260131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.238 qpair failed and we were unable to recover it. 00:32:49.238 [2024-06-11 12:27:02.270191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.238 [2024-06-11 12:27:02.270243] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.238 [2024-06-11 12:27:02.270254] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.238 [2024-06-11 12:27:02.270259] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.238 [2024-06-11 12:27:02.270264] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.238 [2024-06-11 12:27:02.270273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.238 qpair failed and we were unable to recover it. 
00:32:49.498 [2024-06-11 12:27:02.280203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.498 [2024-06-11 12:27:02.280256] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.498 [2024-06-11 12:27:02.280267] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.499 [2024-06-11 12:27:02.280272] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.499 [2024-06-11 12:27:02.280279] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.499 [2024-06-11 12:27:02.280289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.499 qpair failed and we were unable to recover it. 00:32:49.499 [2024-06-11 12:27:02.290260] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.499 [2024-06-11 12:27:02.290308] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.499 [2024-06-11 12:27:02.290319] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.499 [2024-06-11 12:27:02.290323] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.499 [2024-06-11 12:27:02.290328] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.499 [2024-06-11 12:27:02.290337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.499 qpair failed and we were unable to recover it. 00:32:49.499 [2024-06-11 12:27:02.300252] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.499 [2024-06-11 12:27:02.300313] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.499 [2024-06-11 12:27:02.300323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.499 [2024-06-11 12:27:02.300328] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.499 [2024-06-11 12:27:02.300333] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.499 [2024-06-11 12:27:02.300342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.499 qpair failed and we were unable to recover it. 
00:32:49.499 [2024-06-11 12:27:02.310334] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.499 [2024-06-11 12:27:02.310420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.499 [2024-06-11 12:27:02.310431] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.499 [2024-06-11 12:27:02.310436] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.499 [2024-06-11 12:27:02.310440] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.499 [2024-06-11 12:27:02.310452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.499 qpair failed and we were unable to recover it. 00:32:49.499 [2024-06-11 12:27:02.320306] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.499 [2024-06-11 12:27:02.320356] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.499 [2024-06-11 12:27:02.320366] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.499 [2024-06-11 12:27:02.320371] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.499 [2024-06-11 12:27:02.320376] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.499 [2024-06-11 12:27:02.320385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.499 qpair failed and we were unable to recover it. 00:32:49.499 [2024-06-11 12:27:02.330342] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.499 [2024-06-11 12:27:02.330404] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.499 [2024-06-11 12:27:02.330414] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.499 [2024-06-11 12:27:02.330419] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.499 [2024-06-11 12:27:02.330423] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.499 [2024-06-11 12:27:02.330433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.499 qpair failed and we were unable to recover it. 
00:32:49.499 [2024-06-11 12:27:02.340391] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.499 [2024-06-11 12:27:02.340441] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.499 [2024-06-11 12:27:02.340452] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.499 [2024-06-11 12:27:02.340457] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.499 [2024-06-11 12:27:02.340461] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.499 [2024-06-11 12:27:02.340471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.499 qpair failed and we were unable to recover it. 00:32:49.499 [2024-06-11 12:27:02.350432] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.499 [2024-06-11 12:27:02.350481] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.499 [2024-06-11 12:27:02.350492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.499 [2024-06-11 12:27:02.350497] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.499 [2024-06-11 12:27:02.350501] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.499 [2024-06-11 12:27:02.350511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.499 qpair failed and we were unable to recover it. 00:32:49.499 [2024-06-11 12:27:02.360448] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.499 [2024-06-11 12:27:02.360505] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.499 [2024-06-11 12:27:02.360515] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.499 [2024-06-11 12:27:02.360520] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.499 [2024-06-11 12:27:02.360525] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.499 [2024-06-11 12:27:02.360534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.499 qpair failed and we were unable to recover it. 
00:32:49.499 [2024-06-11 12:27:02.370486] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.499 [2024-06-11 12:27:02.370533] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.499 [2024-06-11 12:27:02.370544] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.499 [2024-06-11 12:27:02.370549] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.499 [2024-06-11 12:27:02.370556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.499 [2024-06-11 12:27:02.370566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.499 qpair failed and we were unable to recover it. 00:32:49.499 [2024-06-11 12:27:02.380500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.499 [2024-06-11 12:27:02.380545] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.499 [2024-06-11 12:27:02.380555] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.499 [2024-06-11 12:27:02.380561] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.499 [2024-06-11 12:27:02.380565] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.499 [2024-06-11 12:27:02.380576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.499 qpair failed and we were unable to recover it. 00:32:49.499 [2024-06-11 12:27:02.390545] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.499 [2024-06-11 12:27:02.390605] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.499 [2024-06-11 12:27:02.390622] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.499 [2024-06-11 12:27:02.390628] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.499 [2024-06-11 12:27:02.390633] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.499 [2024-06-11 12:27:02.390647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.499 qpair failed and we were unable to recover it. 
00:32:49.499 [2024-06-11 12:27:02.400556] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.499 [2024-06-11 12:27:02.400640] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.499 [2024-06-11 12:27:02.400651] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.499 [2024-06-11 12:27:02.400656] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.499 [2024-06-11 12:27:02.400661] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.499 [2024-06-11 12:27:02.400671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.499 qpair failed and we were unable to recover it. 00:32:49.499 [2024-06-11 12:27:02.410588] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.499 [2024-06-11 12:27:02.410649] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.499 [2024-06-11 12:27:02.410660] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.499 [2024-06-11 12:27:02.410665] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.500 [2024-06-11 12:27:02.410669] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.500 [2024-06-11 12:27:02.410679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.500 qpair failed and we were unable to recover it. 00:32:49.500 [2024-06-11 12:27:02.420624] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.500 [2024-06-11 12:27:02.420665] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.500 [2024-06-11 12:27:02.420676] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.500 [2024-06-11 12:27:02.420680] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.500 [2024-06-11 12:27:02.420685] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.500 [2024-06-11 12:27:02.420695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.500 qpair failed and we were unable to recover it. 
00:32:49.500 [2024-06-11 12:27:02.430648] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.500 [2024-06-11 12:27:02.430706] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.500 [2024-06-11 12:27:02.430717] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.500 [2024-06-11 12:27:02.430722] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.500 [2024-06-11 12:27:02.430726] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.500 [2024-06-11 12:27:02.430736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.500 qpair failed and we were unable to recover it. 00:32:49.500 [2024-06-11 12:27:02.440711] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.500 [2024-06-11 12:27:02.440801] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.500 [2024-06-11 12:27:02.440815] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.500 [2024-06-11 12:27:02.440821] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.500 [2024-06-11 12:27:02.440826] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.500 [2024-06-11 12:27:02.440837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.500 qpair failed and we were unable to recover it. 00:32:49.500 [2024-06-11 12:27:02.450706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.500 [2024-06-11 12:27:02.450790] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.500 [2024-06-11 12:27:02.450808] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.500 [2024-06-11 12:27:02.450815] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.500 [2024-06-11 12:27:02.450820] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.500 [2024-06-11 12:27:02.450833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.500 qpair failed and we were unable to recover it. 
00:32:49.500 [2024-06-11 12:27:02.460605] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.500 [2024-06-11 12:27:02.460654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.500 [2024-06-11 12:27:02.460666] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.500 [2024-06-11 12:27:02.460675] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.500 [2024-06-11 12:27:02.460680] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.500 [2024-06-11 12:27:02.460690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.500 qpair failed and we were unable to recover it. 00:32:49.500 [2024-06-11 12:27:02.470756] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.500 [2024-06-11 12:27:02.470860] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.500 [2024-06-11 12:27:02.470871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.500 [2024-06-11 12:27:02.470876] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.500 [2024-06-11 12:27:02.470880] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.500 [2024-06-11 12:27:02.470891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.500 qpair failed and we were unable to recover it. 00:32:49.500 [2024-06-11 12:27:02.480803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.500 [2024-06-11 12:27:02.480856] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.500 [2024-06-11 12:27:02.480875] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.500 [2024-06-11 12:27:02.480881] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.500 [2024-06-11 12:27:02.480886] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.500 [2024-06-11 12:27:02.480899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.500 qpair failed and we were unable to recover it. 
00:32:49.500 [2024-06-11 12:27:02.490689] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.500 [2024-06-11 12:27:02.490746] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.500 [2024-06-11 12:27:02.490759] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.500 [2024-06-11 12:27:02.490764] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.500 [2024-06-11 12:27:02.490768] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.500 [2024-06-11 12:27:02.490778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.500 qpair failed and we were unable to recover it. 00:32:49.500 [2024-06-11 12:27:02.500843] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.500 [2024-06-11 12:27:02.500894] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.500 [2024-06-11 12:27:02.500912] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.500 [2024-06-11 12:27:02.500918] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.500 [2024-06-11 12:27:02.500923] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.500 [2024-06-11 12:27:02.500936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.500 qpair failed and we were unable to recover it. 00:32:49.500 [2024-06-11 12:27:02.510883] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.500 [2024-06-11 12:27:02.510936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.500 [2024-06-11 12:27:02.510948] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.500 [2024-06-11 12:27:02.510953] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.500 [2024-06-11 12:27:02.510957] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.500 [2024-06-11 12:27:02.510968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.500 qpair failed and we were unable to recover it. 
00:32:49.500 [2024-06-11 12:27:02.520914] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.500 [2024-06-11 12:27:02.520970] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.500 [2024-06-11 12:27:02.520981] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.500 [2024-06-11 12:27:02.520986] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.500 [2024-06-11 12:27:02.520990] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.500 [2024-06-11 12:27:02.521000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.500 qpair failed and we were unable to recover it. 00:32:49.500 [2024-06-11 12:27:02.530919] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.500 [2024-06-11 12:27:02.530975] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.500 [2024-06-11 12:27:02.530986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.500 [2024-06-11 12:27:02.530991] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.500 [2024-06-11 12:27:02.530995] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.500 [2024-06-11 12:27:02.531005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.500 qpair failed and we were unable to recover it. 00:32:49.762 [2024-06-11 12:27:02.541012] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.762 [2024-06-11 12:27:02.541066] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.762 [2024-06-11 12:27:02.541076] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.762 [2024-06-11 12:27:02.541081] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.762 [2024-06-11 12:27:02.541085] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.762 [2024-06-11 12:27:02.541096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.762 qpair failed and we were unable to recover it. 
00:32:49.762 [2024-06-11 12:27:02.550993] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.762 [2024-06-11 12:27:02.551047] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.762 [2024-06-11 12:27:02.551059] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.762 [2024-06-11 12:27:02.551067] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.762 [2024-06-11 12:27:02.551071] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.762 [2024-06-11 12:27:02.551082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.762 qpair failed and we were unable to recover it. 00:32:49.762 [2024-06-11 12:27:02.561023] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.762 [2024-06-11 12:27:02.561074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.762 [2024-06-11 12:27:02.561084] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.762 [2024-06-11 12:27:02.561090] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.762 [2024-06-11 12:27:02.561094] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.762 [2024-06-11 12:27:02.561104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.762 qpair failed and we were unable to recover it. 00:32:49.762 [2024-06-11 12:27:02.571060] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.762 [2024-06-11 12:27:02.571110] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.762 [2024-06-11 12:27:02.571121] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.762 [2024-06-11 12:27:02.571126] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.762 [2024-06-11 12:27:02.571131] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.762 [2024-06-11 12:27:02.571141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.762 qpair failed and we were unable to recover it. 
00:32:49.762 [2024-06-11 12:27:02.581095] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.762 [2024-06-11 12:27:02.581144] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.762 [2024-06-11 12:27:02.581155] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.762 [2024-06-11 12:27:02.581161] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.762 [2024-06-11 12:27:02.581165] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.762 [2024-06-11 12:27:02.581175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.762 qpair failed and we were unable to recover it. 00:32:49.762 [2024-06-11 12:27:02.591119] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.762 [2024-06-11 12:27:02.591165] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.762 [2024-06-11 12:27:02.591176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.762 [2024-06-11 12:27:02.591181] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.762 [2024-06-11 12:27:02.591185] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.762 [2024-06-11 12:27:02.591195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.762 qpair failed and we were unable to recover it. 00:32:49.762 [2024-06-11 12:27:02.601022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.762 [2024-06-11 12:27:02.601073] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.762 [2024-06-11 12:27:02.601086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.762 [2024-06-11 12:27:02.601091] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.762 [2024-06-11 12:27:02.601096] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.762 [2024-06-11 12:27:02.601107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.762 qpair failed and we were unable to recover it. 
00:32:49.762 [2024-06-11 12:27:02.611161] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.762 [2024-06-11 12:27:02.611212] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.762 [2024-06-11 12:27:02.611223] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.762 [2024-06-11 12:27:02.611228] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.762 [2024-06-11 12:27:02.611232] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.762 [2024-06-11 12:27:02.611243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.762 qpair failed and we were unable to recover it. 00:32:49.762 [2024-06-11 12:27:02.621188] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.763 [2024-06-11 12:27:02.621238] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.763 [2024-06-11 12:27:02.621248] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.763 [2024-06-11 12:27:02.621253] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.763 [2024-06-11 12:27:02.621258] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.763 [2024-06-11 12:27:02.621268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.763 qpair failed and we were unable to recover it. 00:32:49.763 [2024-06-11 12:27:02.631250] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.763 [2024-06-11 12:27:02.631297] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.763 [2024-06-11 12:27:02.631308] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.763 [2024-06-11 12:27:02.631313] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.763 [2024-06-11 12:27:02.631317] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.763 [2024-06-11 12:27:02.631327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.763 qpair failed and we were unable to recover it. 
00:32:49.763 [2024-06-11 12:27:02.641270] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.763 [2024-06-11 12:27:02.641322] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.763 [2024-06-11 12:27:02.641336] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.763 [2024-06-11 12:27:02.641341] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.763 [2024-06-11 12:27:02.641345] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.763 [2024-06-11 12:27:02.641355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.763 qpair failed and we were unable to recover it. 00:32:49.763 [2024-06-11 12:27:02.651298] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.763 [2024-06-11 12:27:02.651344] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.763 [2024-06-11 12:27:02.651355] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.763 [2024-06-11 12:27:02.651360] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.763 [2024-06-11 12:27:02.651364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.763 [2024-06-11 12:27:02.651375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.763 qpair failed and we were unable to recover it. 00:32:49.763 [2024-06-11 12:27:02.661175] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.763 [2024-06-11 12:27:02.661224] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.763 [2024-06-11 12:27:02.661235] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.763 [2024-06-11 12:27:02.661240] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.763 [2024-06-11 12:27:02.661244] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.763 [2024-06-11 12:27:02.661255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.763 qpair failed and we were unable to recover it. 
00:32:49.763 [2024-06-11 12:27:02.671338] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.763 [2024-06-11 12:27:02.671388] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.763 [2024-06-11 12:27:02.671399] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.763 [2024-06-11 12:27:02.671405] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.763 [2024-06-11 12:27:02.671409] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.763 [2024-06-11 12:27:02.671419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.763 qpair failed and we were unable to recover it. 00:32:49.763 [2024-06-11 12:27:02.681397] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.763 [2024-06-11 12:27:02.681448] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.763 [2024-06-11 12:27:02.681458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.763 [2024-06-11 12:27:02.681463] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.763 [2024-06-11 12:27:02.681468] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.763 [2024-06-11 12:27:02.681481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.763 qpair failed and we were unable to recover it. 00:32:49.763 [2024-06-11 12:27:02.691267] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.763 [2024-06-11 12:27:02.691313] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.763 [2024-06-11 12:27:02.691324] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.763 [2024-06-11 12:27:02.691329] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.763 [2024-06-11 12:27:02.691334] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.763 [2024-06-11 12:27:02.691343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.763 qpair failed and we were unable to recover it. 
00:32:49.763 [2024-06-11 12:27:02.701424] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.763 [2024-06-11 12:27:02.701478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.763 [2024-06-11 12:27:02.701489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.763 [2024-06-11 12:27:02.701493] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.763 [2024-06-11 12:27:02.701498] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.763 [2024-06-11 12:27:02.701508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.763 qpair failed and we were unable to recover it. 00:32:49.763 [2024-06-11 12:27:02.711328] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.763 [2024-06-11 12:27:02.711393] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.763 [2024-06-11 12:27:02.711405] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.763 [2024-06-11 12:27:02.711410] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.763 [2024-06-11 12:27:02.711415] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.763 [2024-06-11 12:27:02.711425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.763 qpair failed and we were unable to recover it. 00:32:49.763 [2024-06-11 12:27:02.721497] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.763 [2024-06-11 12:27:02.721569] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.763 [2024-06-11 12:27:02.721580] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.763 [2024-06-11 12:27:02.721585] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.763 [2024-06-11 12:27:02.721590] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.763 [2024-06-11 12:27:02.721600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.763 qpair failed and we were unable to recover it. 
00:32:49.763 [2024-06-11 12:27:02.731514] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.763 [2024-06-11 12:27:02.731562] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.763 [2024-06-11 12:27:02.731577] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.763 [2024-06-11 12:27:02.731582] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.763 [2024-06-11 12:27:02.731587] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.763 [2024-06-11 12:27:02.731597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.763 qpair failed and we were unable to recover it. 00:32:49.763 [2024-06-11 12:27:02.741424] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.763 [2024-06-11 12:27:02.741476] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.763 [2024-06-11 12:27:02.741486] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.763 [2024-06-11 12:27:02.741491] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.763 [2024-06-11 12:27:02.741495] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.763 [2024-06-11 12:27:02.741505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.763 qpair failed and we were unable to recover it. 00:32:49.763 [2024-06-11 12:27:02.751579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.763 [2024-06-11 12:27:02.751627] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.763 [2024-06-11 12:27:02.751638] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.764 [2024-06-11 12:27:02.751642] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.764 [2024-06-11 12:27:02.751647] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.764 [2024-06-11 12:27:02.751656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.764 qpair failed and we were unable to recover it. 
00:32:49.764 [2024-06-11 12:27:02.761607] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.764 [2024-06-11 12:27:02.761666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.764 [2024-06-11 12:27:02.761676] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.764 [2024-06-11 12:27:02.761681] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.764 [2024-06-11 12:27:02.761686] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.764 [2024-06-11 12:27:02.761695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.764 qpair failed and we were unable to recover it. 00:32:49.764 [2024-06-11 12:27:02.771632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.764 [2024-06-11 12:27:02.771683] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.764 [2024-06-11 12:27:02.771693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.764 [2024-06-11 12:27:02.771698] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.764 [2024-06-11 12:27:02.771705] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.764 [2024-06-11 12:27:02.771715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.764 qpair failed and we were unable to recover it. 00:32:49.764 [2024-06-11 12:27:02.781666] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.764 [2024-06-11 12:27:02.781713] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.764 [2024-06-11 12:27:02.781724] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.764 [2024-06-11 12:27:02.781729] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.764 [2024-06-11 12:27:02.781733] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.764 [2024-06-11 12:27:02.781743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.764 qpair failed and we were unable to recover it. 
00:32:49.764 [2024-06-11 12:27:02.791719] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.764 [2024-06-11 12:27:02.791770] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.764 [2024-06-11 12:27:02.791780] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.764 [2024-06-11 12:27:02.791785] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.764 [2024-06-11 12:27:02.791790] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:49.764 [2024-06-11 12:27:02.791800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:49.764 qpair failed and we were unable to recover it. 00:32:50.026 [2024-06-11 12:27:02.801724] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.026 [2024-06-11 12:27:02.801777] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.026 [2024-06-11 12:27:02.801787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.026 [2024-06-11 12:27:02.801792] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.026 [2024-06-11 12:27:02.801797] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.026 [2024-06-11 12:27:02.801807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.026 qpair failed and we were unable to recover it. 00:32:50.026 [2024-06-11 12:27:02.811771] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.026 [2024-06-11 12:27:02.811827] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.026 [2024-06-11 12:27:02.811837] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.026 [2024-06-11 12:27:02.811843] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.026 [2024-06-11 12:27:02.811847] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.026 [2024-06-11 12:27:02.811857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.026 qpair failed and we were unable to recover it. 
00:32:50.026 [2024-06-11 12:27:02.821739] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.026 [2024-06-11 12:27:02.821794] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.026 [2024-06-11 12:27:02.821805] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.026 [2024-06-11 12:27:02.821810] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.026 [2024-06-11 12:27:02.821814] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.026 [2024-06-11 12:27:02.821824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.026 qpair failed and we were unable to recover it. 00:32:50.026 [2024-06-11 12:27:02.831813] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.026 [2024-06-11 12:27:02.831860] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.026 [2024-06-11 12:27:02.831870] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.026 [2024-06-11 12:27:02.831875] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.026 [2024-06-11 12:27:02.831880] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.026 [2024-06-11 12:27:02.831889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.026 qpair failed and we were unable to recover it. 00:32:50.026 [2024-06-11 12:27:02.841855] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.026 [2024-06-11 12:27:02.841906] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.026 [2024-06-11 12:27:02.841917] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.026 [2024-06-11 12:27:02.841921] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.026 [2024-06-11 12:27:02.841926] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.026 [2024-06-11 12:27:02.841935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.026 qpair failed and we were unable to recover it. 
00:32:50.026 [2024-06-11 12:27:02.851868] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.026 [2024-06-11 12:27:02.851914] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.026 [2024-06-11 12:27:02.851925] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.026 [2024-06-11 12:27:02.851929] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.026 [2024-06-11 12:27:02.851934] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.026 [2024-06-11 12:27:02.851943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.026 qpair failed and we were unable to recover it. 00:32:50.026 [2024-06-11 12:27:02.861909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.026 [2024-06-11 12:27:02.861959] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.026 [2024-06-11 12:27:02.861969] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.026 [2024-06-11 12:27:02.861974] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.026 [2024-06-11 12:27:02.861981] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.026 [2024-06-11 12:27:02.861991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.026 qpair failed and we were unable to recover it. 00:32:50.027 [2024-06-11 12:27:02.871896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.027 [2024-06-11 12:27:02.871945] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.027 [2024-06-11 12:27:02.871957] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.027 [2024-06-11 12:27:02.871962] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.027 [2024-06-11 12:27:02.871966] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.027 [2024-06-11 12:27:02.871976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.027 qpair failed and we were unable to recover it. 
00:32:50.027 [2024-06-11 12:27:02.881963] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.027 [2024-06-11 12:27:02.882019] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.027 [2024-06-11 12:27:02.882031] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.027 [2024-06-11 12:27:02.882036] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.027 [2024-06-11 12:27:02.882040] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.027 [2024-06-11 12:27:02.882050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.027 qpair failed and we were unable to recover it. 00:32:50.027 [2024-06-11 12:27:02.891983] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.027 [2024-06-11 12:27:02.892026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.027 [2024-06-11 12:27:02.892037] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.027 [2024-06-11 12:27:02.892042] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.027 [2024-06-11 12:27:02.892046] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.027 [2024-06-11 12:27:02.892056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.027 qpair failed and we were unable to recover it. 00:32:50.027 [2024-06-11 12:27:02.901889] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.027 [2024-06-11 12:27:02.901942] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.027 [2024-06-11 12:27:02.901953] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.027 [2024-06-11 12:27:02.901958] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.027 [2024-06-11 12:27:02.901963] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.027 [2024-06-11 12:27:02.901973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.027 qpair failed and we were unable to recover it. 
00:32:50.027 [2024-06-11 12:27:02.912051] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.027 [2024-06-11 12:27:02.912101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.027 [2024-06-11 12:27:02.912112] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.027 [2024-06-11 12:27:02.912117] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.027 [2024-06-11 12:27:02.912121] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.027 [2024-06-11 12:27:02.912132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.027 qpair failed and we were unable to recover it. 00:32:50.027 [2024-06-11 12:27:02.921948] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.027 [2024-06-11 12:27:02.922003] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.027 [2024-06-11 12:27:02.922014] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.027 [2024-06-11 12:27:02.922023] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.027 [2024-06-11 12:27:02.922028] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.027 [2024-06-11 12:27:02.922039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.027 qpair failed and we were unable to recover it. 00:32:50.027 [2024-06-11 12:27:02.932072] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.027 [2024-06-11 12:27:02.932131] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.027 [2024-06-11 12:27:02.932142] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.027 [2024-06-11 12:27:02.932146] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.027 [2024-06-11 12:27:02.932151] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.027 [2024-06-11 12:27:02.932161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.027 qpair failed and we were unable to recover it. 
00:32:50.027 [2024-06-11 12:27:02.942021] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.027 [2024-06-11 12:27:02.942066] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.027 [2024-06-11 12:27:02.942077] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.027 [2024-06-11 12:27:02.942082] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.027 [2024-06-11 12:27:02.942086] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.027 [2024-06-11 12:27:02.942096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.027 qpair failed and we were unable to recover it. 00:32:50.027 [2024-06-11 12:27:02.952172] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.027 [2024-06-11 12:27:02.952221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.027 [2024-06-11 12:27:02.952231] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.027 [2024-06-11 12:27:02.952239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.027 [2024-06-11 12:27:02.952243] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.027 [2024-06-11 12:27:02.952254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.027 qpair failed and we were unable to recover it. 00:32:50.027 [2024-06-11 12:27:02.962169] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.027 [2024-06-11 12:27:02.962220] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.027 [2024-06-11 12:27:02.962230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.027 [2024-06-11 12:27:02.962235] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.027 [2024-06-11 12:27:02.962240] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.027 [2024-06-11 12:27:02.962250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.027 qpair failed and we were unable to recover it. 
00:32:50.027 [2024-06-11 12:27:02.972210] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.027 [2024-06-11 12:27:02.972255] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.027 [2024-06-11 12:27:02.972265] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.027 [2024-06-11 12:27:02.972270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.027 [2024-06-11 12:27:02.972274] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.027 [2024-06-11 12:27:02.972284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.027 qpair failed and we were unable to recover it. 00:32:50.027 [2024-06-11 12:27:02.982255] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.027 [2024-06-11 12:27:02.982302] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.027 [2024-06-11 12:27:02.982313] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.027 [2024-06-11 12:27:02.982319] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.027 [2024-06-11 12:27:02.982323] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.027 [2024-06-11 12:27:02.982333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.027 qpair failed and we were unable to recover it. 00:32:50.027 [2024-06-11 12:27:02.992269] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.028 [2024-06-11 12:27:02.992318] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.028 [2024-06-11 12:27:02.992329] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.028 [2024-06-11 12:27:02.992334] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.028 [2024-06-11 12:27:02.992339] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.028 [2024-06-11 12:27:02.992348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.028 qpair failed and we were unable to recover it. 
00:32:50.028 [2024-06-11 12:27:03.002314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.028 [2024-06-11 12:27:03.002370] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.028 [2024-06-11 12:27:03.002381] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.028 [2024-06-11 12:27:03.002385] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.028 [2024-06-11 12:27:03.002390] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.028 [2024-06-11 12:27:03.002400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.028 qpair failed and we were unable to recover it. 00:32:50.028 [2024-06-11 12:27:03.012305] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.028 [2024-06-11 12:27:03.012359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.028 [2024-06-11 12:27:03.012370] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.028 [2024-06-11 12:27:03.012375] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.028 [2024-06-11 12:27:03.012379] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.028 [2024-06-11 12:27:03.012389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.028 qpair failed and we were unable to recover it. 00:32:50.028 [2024-06-11 12:27:03.022244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.028 [2024-06-11 12:27:03.022341] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.028 [2024-06-11 12:27:03.022352] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.028 [2024-06-11 12:27:03.022357] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.028 [2024-06-11 12:27:03.022362] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.028 [2024-06-11 12:27:03.022371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.028 qpair failed and we were unable to recover it. 
00:32:50.028 [2024-06-11 12:27:03.032394] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.028 [2024-06-11 12:27:03.032441] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.028 [2024-06-11 12:27:03.032451] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.028 [2024-06-11 12:27:03.032456] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.028 [2024-06-11 12:27:03.032460] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.028 [2024-06-11 12:27:03.032470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.028 qpair failed and we were unable to recover it. 00:32:50.028 [2024-06-11 12:27:03.042414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.028 [2024-06-11 12:27:03.042471] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.028 [2024-06-11 12:27:03.042482] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.028 [2024-06-11 12:27:03.042489] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.028 [2024-06-11 12:27:03.042494] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.028 [2024-06-11 12:27:03.042504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.028 qpair failed and we were unable to recover it. 00:32:50.028 [2024-06-11 12:27:03.052452] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.028 [2024-06-11 12:27:03.052500] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.028 [2024-06-11 12:27:03.052511] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.028 [2024-06-11 12:27:03.052516] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.028 [2024-06-11 12:27:03.052520] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.028 [2024-06-11 12:27:03.052530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.028 qpair failed and we were unable to recover it. 
00:32:50.291 [2024-06-11 12:27:03.062470] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.291 [2024-06-11 12:27:03.062526] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.291 [2024-06-11 12:27:03.062537] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.291 [2024-06-11 12:27:03.062542] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.291 [2024-06-11 12:27:03.062546] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.291 [2024-06-11 12:27:03.062556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.291 qpair failed and we were unable to recover it. 00:32:50.291 [2024-06-11 12:27:03.072383] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.291 [2024-06-11 12:27:03.072433] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.291 [2024-06-11 12:27:03.072444] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.292 [2024-06-11 12:27:03.072449] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.292 [2024-06-11 12:27:03.072453] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.292 [2024-06-11 12:27:03.072463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.292 qpair failed and we were unable to recover it. 00:32:50.292 [2024-06-11 12:27:03.082506] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.292 [2024-06-11 12:27:03.082557] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.292 [2024-06-11 12:27:03.082568] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.292 [2024-06-11 12:27:03.082573] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.292 [2024-06-11 12:27:03.082577] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.292 [2024-06-11 12:27:03.082587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.292 qpair failed and we were unable to recover it. 
00:32:50.292 [2024-06-11 12:27:03.092566] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.292 [2024-06-11 12:27:03.092614] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.292 [2024-06-11 12:27:03.092625] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.292 [2024-06-11 12:27:03.092630] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.292 [2024-06-11 12:27:03.092634] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.292 [2024-06-11 12:27:03.092644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.292 qpair failed and we were unable to recover it. 00:32:50.292 [2024-06-11 12:27:03.102601] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.292 [2024-06-11 12:27:03.102648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.292 [2024-06-11 12:27:03.102658] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.292 [2024-06-11 12:27:03.102663] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.292 [2024-06-11 12:27:03.102668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.292 [2024-06-11 12:27:03.102677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.292 qpair failed and we were unable to recover it. 00:32:50.292 [2024-06-11 12:27:03.112620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.292 [2024-06-11 12:27:03.112667] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.292 [2024-06-11 12:27:03.112678] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.292 [2024-06-11 12:27:03.112683] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.292 [2024-06-11 12:27:03.112687] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.292 [2024-06-11 12:27:03.112697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.292 qpair failed and we were unable to recover it. 
00:32:50.292 [2024-06-11 12:27:03.122645] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.292 [2024-06-11 12:27:03.122701] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.292 [2024-06-11 12:27:03.122711] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.292 [2024-06-11 12:27:03.122716] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.292 [2024-06-11 12:27:03.122721] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.292 [2024-06-11 12:27:03.122731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.292 qpair failed and we were unable to recover it. 00:32:50.292 [2024-06-11 12:27:03.132681] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.292 [2024-06-11 12:27:03.132726] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.292 [2024-06-11 12:27:03.132739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.292 [2024-06-11 12:27:03.132744] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.292 [2024-06-11 12:27:03.132749] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.292 [2024-06-11 12:27:03.132758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.292 qpair failed and we were unable to recover it. 00:32:50.292 [2024-06-11 12:27:03.142703] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.292 [2024-06-11 12:27:03.142760] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.292 [2024-06-11 12:27:03.142771] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.292 [2024-06-11 12:27:03.142775] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.292 [2024-06-11 12:27:03.142780] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.292 [2024-06-11 12:27:03.142790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.292 qpair failed and we were unable to recover it. 
00:32:50.292 [2024-06-11 12:27:03.152732] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.292 [2024-06-11 12:27:03.152777] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.292 [2024-06-11 12:27:03.152787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.292 [2024-06-11 12:27:03.152792] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.292 [2024-06-11 12:27:03.152796] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.292 [2024-06-11 12:27:03.152806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.292 qpair failed and we were unable to recover it. 00:32:50.292 [2024-06-11 12:27:03.162703] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.292 [2024-06-11 12:27:03.162761] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.292 [2024-06-11 12:27:03.162772] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.292 [2024-06-11 12:27:03.162776] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.292 [2024-06-11 12:27:03.162781] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.292 [2024-06-11 12:27:03.162790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.292 qpair failed and we were unable to recover it. 00:32:50.292 [2024-06-11 12:27:03.172666] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.292 [2024-06-11 12:27:03.172719] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.292 [2024-06-11 12:27:03.172730] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.292 [2024-06-11 12:27:03.172735] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.292 [2024-06-11 12:27:03.172739] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.292 [2024-06-11 12:27:03.172752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.292 qpair failed and we were unable to recover it. 
00:32:50.292 [2024-06-11 12:27:03.182831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.292 [2024-06-11 12:27:03.182884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.292 [2024-06-11 12:27:03.182902] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.292 [2024-06-11 12:27:03.182908] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.292 [2024-06-11 12:27:03.182912] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.292 [2024-06-11 12:27:03.182926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.292 qpair failed and we were unable to recover it. 00:32:50.292 [2024-06-11 12:27:03.192850] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.292 [2024-06-11 12:27:03.192902] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.292 [2024-06-11 12:27:03.192915] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.292 [2024-06-11 12:27:03.192920] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.292 [2024-06-11 12:27:03.192925] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.292 [2024-06-11 12:27:03.192935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.292 qpair failed and we were unable to recover it. 00:32:50.292 [2024-06-11 12:27:03.202902] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.293 [2024-06-11 12:27:03.202956] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.293 [2024-06-11 12:27:03.202966] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.293 [2024-06-11 12:27:03.202971] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.293 [2024-06-11 12:27:03.202976] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.293 [2024-06-11 12:27:03.202986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.293 qpair failed and we were unable to recover it. 
00:32:50.293 [2024-06-11 12:27:03.212917] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.293 [2024-06-11 12:27:03.212965] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.293 [2024-06-11 12:27:03.212976] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.293 [2024-06-11 12:27:03.212980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.293 [2024-06-11 12:27:03.212985] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.293 [2024-06-11 12:27:03.212995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.293 qpair failed and we were unable to recover it. 00:32:50.293 [2024-06-11 12:27:03.222837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.293 [2024-06-11 12:27:03.222885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.293 [2024-06-11 12:27:03.222900] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.293 [2024-06-11 12:27:03.222906] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.293 [2024-06-11 12:27:03.222911] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.293 [2024-06-11 12:27:03.222921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.293 qpair failed and we were unable to recover it. 00:32:50.293 [2024-06-11 12:27:03.232954] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.293 [2024-06-11 12:27:03.233004] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.293 [2024-06-11 12:27:03.233015] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.293 [2024-06-11 12:27:03.233024] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.293 [2024-06-11 12:27:03.233028] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.293 [2024-06-11 12:27:03.233039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.293 qpair failed and we were unable to recover it. 
00:32:50.293 [2024-06-11 12:27:03.243006] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.293 [2024-06-11 12:27:03.243081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.293 [2024-06-11 12:27:03.243091] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.293 [2024-06-11 12:27:03.243096] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.293 [2024-06-11 12:27:03.243101] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.293 [2024-06-11 12:27:03.243111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.293 qpair failed and we were unable to recover it. 00:32:50.293 [2024-06-11 12:27:03.253024] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.293 [2024-06-11 12:27:03.253070] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.293 [2024-06-11 12:27:03.253081] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.293 [2024-06-11 12:27:03.253086] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.293 [2024-06-11 12:27:03.253091] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.293 [2024-06-11 12:27:03.253101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.293 qpair failed and we were unable to recover it. 00:32:50.293 [2024-06-11 12:27:03.263081] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.293 [2024-06-11 12:27:03.263131] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.293 [2024-06-11 12:27:03.263142] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.293 [2024-06-11 12:27:03.263147] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.293 [2024-06-11 12:27:03.263151] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.293 [2024-06-11 12:27:03.263164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.293 qpair failed and we were unable to recover it. 
00:32:50.293 [2024-06-11 12:27:03.273062] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.293 [2024-06-11 12:27:03.273122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.293 [2024-06-11 12:27:03.273133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.293 [2024-06-11 12:27:03.273138] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.293 [2024-06-11 12:27:03.273142] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.293 [2024-06-11 12:27:03.273152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.293 qpair failed and we were unable to recover it. 00:32:50.293 [2024-06-11 12:27:03.283120] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.293 [2024-06-11 12:27:03.283170] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.293 [2024-06-11 12:27:03.283181] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.293 [2024-06-11 12:27:03.283186] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.293 [2024-06-11 12:27:03.283190] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.293 [2024-06-11 12:27:03.283200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.293 qpair failed and we were unable to recover it. 00:32:50.293 [2024-06-11 12:27:03.293004] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.293 [2024-06-11 12:27:03.293065] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.293 [2024-06-11 12:27:03.293076] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.293 [2024-06-11 12:27:03.293081] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.293 [2024-06-11 12:27:03.293085] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.293 [2024-06-11 12:27:03.293095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.293 qpair failed and we were unable to recover it. 
00:32:50.293 [2024-06-11 12:27:03.303169] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.293 [2024-06-11 12:27:03.303213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.293 [2024-06-11 12:27:03.303223] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.293 [2024-06-11 12:27:03.303228] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.293 [2024-06-11 12:27:03.303232] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.293 [2024-06-11 12:27:03.303242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.293 qpair failed and we were unable to recover it. 00:32:50.293 [2024-06-11 12:27:03.313201] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.293 [2024-06-11 12:27:03.313251] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.293 [2024-06-11 12:27:03.313263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.293 [2024-06-11 12:27:03.313268] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.293 [2024-06-11 12:27:03.313272] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.294 [2024-06-11 12:27:03.313282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.294 qpair failed and we were unable to recover it. 00:32:50.294 [2024-06-11 12:27:03.323220] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.294 [2024-06-11 12:27:03.323273] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.294 [2024-06-11 12:27:03.323284] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.294 [2024-06-11 12:27:03.323289] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.294 [2024-06-11 12:27:03.323293] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.294 [2024-06-11 12:27:03.323303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.294 qpair failed and we were unable to recover it. 
00:32:50.555 [2024-06-11 12:27:03.333267] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.555 [2024-06-11 12:27:03.333318] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.555 [2024-06-11 12:27:03.333329] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.555 [2024-06-11 12:27:03.333334] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.555 [2024-06-11 12:27:03.333338] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.555 [2024-06-11 12:27:03.333348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.555 qpair failed and we were unable to recover it. 00:32:50.555 [2024-06-11 12:27:03.343214] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.555 [2024-06-11 12:27:03.343261] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.555 [2024-06-11 12:27:03.343272] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.555 [2024-06-11 12:27:03.343277] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.555 [2024-06-11 12:27:03.343282] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.555 [2024-06-11 12:27:03.343292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.555 qpair failed and we were unable to recover it. 00:32:50.555 [2024-06-11 12:27:03.353316] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.555 [2024-06-11 12:27:03.353403] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.555 [2024-06-11 12:27:03.353414] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.555 [2024-06-11 12:27:03.353419] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.555 [2024-06-11 12:27:03.353427] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.555 [2024-06-11 12:27:03.353437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.555 qpair failed and we were unable to recover it. 
00:32:50.555 [2024-06-11 12:27:03.363351] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.555 [2024-06-11 12:27:03.363405] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.555 [2024-06-11 12:27:03.363416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.555 [2024-06-11 12:27:03.363421] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.555 [2024-06-11 12:27:03.363425] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.555 [2024-06-11 12:27:03.363435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.555 qpair failed and we were unable to recover it. 00:32:50.555 [2024-06-11 12:27:03.373373] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.555 [2024-06-11 12:27:03.373418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.555 [2024-06-11 12:27:03.373429] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.555 [2024-06-11 12:27:03.373433] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.555 [2024-06-11 12:27:03.373438] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.556 [2024-06-11 12:27:03.373448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.556 qpair failed and we were unable to recover it. 00:32:50.556 [2024-06-11 12:27:03.383402] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.556 [2024-06-11 12:27:03.383445] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.556 [2024-06-11 12:27:03.383456] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.556 [2024-06-11 12:27:03.383461] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.556 [2024-06-11 12:27:03.383465] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.556 [2024-06-11 12:27:03.383475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.556 qpair failed and we were unable to recover it. 
00:32:50.556 [2024-06-11 12:27:03.393435] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.556 [2024-06-11 12:27:03.393486] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.556 [2024-06-11 12:27:03.393497] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.556 [2024-06-11 12:27:03.393502] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.556 [2024-06-11 12:27:03.393507] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.556 [2024-06-11 12:27:03.393516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.556 qpair failed and we were unable to recover it. 00:32:50.556 [2024-06-11 12:27:03.403436] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.556 [2024-06-11 12:27:03.403497] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.556 [2024-06-11 12:27:03.403508] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.556 [2024-06-11 12:27:03.403512] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.556 [2024-06-11 12:27:03.403517] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.556 [2024-06-11 12:27:03.403526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.556 qpair failed and we were unable to recover it. 00:32:50.556 [2024-06-11 12:27:03.413492] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.556 [2024-06-11 12:27:03.413542] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.556 [2024-06-11 12:27:03.413553] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.556 [2024-06-11 12:27:03.413558] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.556 [2024-06-11 12:27:03.413562] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.556 [2024-06-11 12:27:03.413572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.556 qpair failed and we were unable to recover it. 
00:32:50.556 [2024-06-11 12:27:03.423535] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.556 [2024-06-11 12:27:03.423582] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.556 [2024-06-11 12:27:03.423593] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.556 [2024-06-11 12:27:03.423598] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.556 [2024-06-11 12:27:03.423602] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.556 [2024-06-11 12:27:03.423612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.556 qpair failed and we were unable to recover it. 00:32:50.556 [2024-06-11 12:27:03.433466] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.556 [2024-06-11 12:27:03.433559] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.556 [2024-06-11 12:27:03.433571] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.556 [2024-06-11 12:27:03.433577] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.556 [2024-06-11 12:27:03.433581] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.556 [2024-06-11 12:27:03.433591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.556 qpair failed and we were unable to recover it. 00:32:50.556 [2024-06-11 12:27:03.443584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.556 [2024-06-11 12:27:03.443629] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.556 [2024-06-11 12:27:03.443640] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.556 [2024-06-11 12:27:03.443647] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.556 [2024-06-11 12:27:03.443652] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.556 [2024-06-11 12:27:03.443662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.556 qpair failed and we were unable to recover it. 
00:32:50.556 [2024-06-11 12:27:03.453606] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.556 [2024-06-11 12:27:03.453658] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.556 [2024-06-11 12:27:03.453669] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.556 [2024-06-11 12:27:03.453674] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.556 [2024-06-11 12:27:03.453678] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.556 [2024-06-11 12:27:03.453688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.556 qpair failed and we were unable to recover it. 00:32:50.556 [2024-06-11 12:27:03.463659] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.556 [2024-06-11 12:27:03.463751] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.556 [2024-06-11 12:27:03.463763] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.556 [2024-06-11 12:27:03.463768] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.556 [2024-06-11 12:27:03.463772] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.556 [2024-06-11 12:27:03.463782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.556 qpair failed and we were unable to recover it. 00:32:50.556 [2024-06-11 12:27:03.473650] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.556 [2024-06-11 12:27:03.473699] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.556 [2024-06-11 12:27:03.473710] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.556 [2024-06-11 12:27:03.473715] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.556 [2024-06-11 12:27:03.473719] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.556 [2024-06-11 12:27:03.473729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.556 qpair failed and we were unable to recover it. 
00:32:50.556 [2024-06-11 12:27:03.483649] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.556 [2024-06-11 12:27:03.483701] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.556 [2024-06-11 12:27:03.483712] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.556 [2024-06-11 12:27:03.483716] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.556 [2024-06-11 12:27:03.483721] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.556 [2024-06-11 12:27:03.483731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.556 qpair failed and we were unable to recover it. 00:32:50.556 [2024-06-11 12:27:03.493621] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.556 [2024-06-11 12:27:03.493663] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.556 [2024-06-11 12:27:03.493673] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.556 [2024-06-11 12:27:03.493678] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.556 [2024-06-11 12:27:03.493683] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.556 [2024-06-11 12:27:03.493692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.556 qpair failed and we were unable to recover it. 00:32:50.556 [2024-06-11 12:27:03.503749] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.556 [2024-06-11 12:27:03.503792] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.556 [2024-06-11 12:27:03.503803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.556 [2024-06-11 12:27:03.503807] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.556 [2024-06-11 12:27:03.503812] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.556 [2024-06-11 12:27:03.503821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.556 qpair failed and we were unable to recover it. 
00:32:50.556 [2024-06-11 12:27:03.513773] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.557 [2024-06-11 12:27:03.513821] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.557 [2024-06-11 12:27:03.513831] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.557 [2024-06-11 12:27:03.513836] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.557 [2024-06-11 12:27:03.513840] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.557 [2024-06-11 12:27:03.513850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.557 qpair failed and we were unable to recover it. 00:32:50.557 [2024-06-11 12:27:03.523800] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.557 [2024-06-11 12:27:03.523858] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.557 [2024-06-11 12:27:03.523869] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.557 [2024-06-11 12:27:03.523874] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.557 [2024-06-11 12:27:03.523878] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.557 [2024-06-11 12:27:03.523888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.557 qpair failed and we were unable to recover it. 00:32:50.557 [2024-06-11 12:27:03.533842] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.557 [2024-06-11 12:27:03.533886] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.557 [2024-06-11 12:27:03.533897] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.557 [2024-06-11 12:27:03.533907] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.557 [2024-06-11 12:27:03.533912] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.557 [2024-06-11 12:27:03.533921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.557 qpair failed and we were unable to recover it. 
00:32:50.557 [2024-06-11 12:27:03.543887] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.557 [2024-06-11 12:27:03.543967] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.557 [2024-06-11 12:27:03.543986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.557 [2024-06-11 12:27:03.543992] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.557 [2024-06-11 12:27:03.543997] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.557 [2024-06-11 12:27:03.544010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.557 qpair failed and we were unable to recover it. 00:32:50.557 [2024-06-11 12:27:03.553909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.557 [2024-06-11 12:27:03.553958] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.557 [2024-06-11 12:27:03.553970] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.557 [2024-06-11 12:27:03.553975] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.557 [2024-06-11 12:27:03.553979] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.557 [2024-06-11 12:27:03.553990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.557 qpair failed and we were unable to recover it. 00:32:50.557 [2024-06-11 12:27:03.563955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.557 [2024-06-11 12:27:03.564004] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.557 [2024-06-11 12:27:03.564015] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.557 [2024-06-11 12:27:03.564023] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.557 [2024-06-11 12:27:03.564027] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.557 [2024-06-11 12:27:03.564038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.557 qpair failed and we were unable to recover it. 
00:32:50.557 [2024-06-11 12:27:03.573832] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.557 [2024-06-11 12:27:03.573878] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.557 [2024-06-11 12:27:03.573889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.557 [2024-06-11 12:27:03.573894] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.557 [2024-06-11 12:27:03.573898] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.557 [2024-06-11 12:27:03.573908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.557 qpair failed and we were unable to recover it. 00:32:50.557 [2024-06-11 12:27:03.583989] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.557 [2024-06-11 12:27:03.584034] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.557 [2024-06-11 12:27:03.584045] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.557 [2024-06-11 12:27:03.584050] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.557 [2024-06-11 12:27:03.584055] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.557 [2024-06-11 12:27:03.584065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.557 qpair failed and we were unable to recover it. 00:32:50.818 [2024-06-11 12:27:03.594002] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.818 [2024-06-11 12:27:03.594053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.818 [2024-06-11 12:27:03.594064] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.818 [2024-06-11 12:27:03.594069] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.818 [2024-06-11 12:27:03.594073] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.818 [2024-06-11 12:27:03.594083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.818 qpair failed and we were unable to recover it. 
00:32:50.818 [2024-06-11 12:27:03.603924] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.818 [2024-06-11 12:27:03.603980] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.818 [2024-06-11 12:27:03.603991] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.818 [2024-06-11 12:27:03.603996] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.818 [2024-06-11 12:27:03.604001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.818 [2024-06-11 12:27:03.604011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.818 qpair failed and we were unable to recover it. 00:32:50.818 [2024-06-11 12:27:03.613948] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.818 [2024-06-11 12:27:03.614011] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.818 [2024-06-11 12:27:03.614025] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.819 [2024-06-11 12:27:03.614030] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.819 [2024-06-11 12:27:03.614035] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.819 [2024-06-11 12:27:03.614045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.819 qpair failed and we were unable to recover it. 00:32:50.819 [2024-06-11 12:27:03.624123] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.819 [2024-06-11 12:27:03.624167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.819 [2024-06-11 12:27:03.624181] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.819 [2024-06-11 12:27:03.624186] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.819 [2024-06-11 12:27:03.624190] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.819 [2024-06-11 12:27:03.624200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.819 qpair failed and we were unable to recover it. 
00:32:50.819 [2024-06-11 12:27:03.634131] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.819 [2024-06-11 12:27:03.634177] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.819 [2024-06-11 12:27:03.634188] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.819 [2024-06-11 12:27:03.634193] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.819 [2024-06-11 12:27:03.634197] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.819 [2024-06-11 12:27:03.634207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.819 qpair failed and we were unable to recover it. 00:32:50.819 [2024-06-11 12:27:03.644163] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.819 [2024-06-11 12:27:03.644217] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.819 [2024-06-11 12:27:03.644228] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.819 [2024-06-11 12:27:03.644233] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.819 [2024-06-11 12:27:03.644237] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.819 [2024-06-11 12:27:03.644247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.819 qpair failed and we were unable to recover it. 00:32:50.819 [2024-06-11 12:27:03.654192] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.819 [2024-06-11 12:27:03.654240] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.819 [2024-06-11 12:27:03.654251] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.819 [2024-06-11 12:27:03.654256] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.819 [2024-06-11 12:27:03.654260] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.819 [2024-06-11 12:27:03.654271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.819 qpair failed and we were unable to recover it. 
00:32:50.819 [2024-06-11 12:27:03.664219] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.819 [2024-06-11 12:27:03.664265] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.819 [2024-06-11 12:27:03.664276] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.819 [2024-06-11 12:27:03.664281] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.819 [2024-06-11 12:27:03.664286] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.819 [2024-06-11 12:27:03.664299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.819 qpair failed and we were unable to recover it. 00:32:50.819 [2024-06-11 12:27:03.674238] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.819 [2024-06-11 12:27:03.674289] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.819 [2024-06-11 12:27:03.674300] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.819 [2024-06-11 12:27:03.674305] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.819 [2024-06-11 12:27:03.674309] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.819 [2024-06-11 12:27:03.674319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.819 qpair failed and we were unable to recover it. 00:32:50.819 [2024-06-11 12:27:03.684291] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.819 [2024-06-11 12:27:03.684350] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.819 [2024-06-11 12:27:03.684360] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.819 [2024-06-11 12:27:03.684365] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.819 [2024-06-11 12:27:03.684370] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.819 [2024-06-11 12:27:03.684380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.819 qpair failed and we were unable to recover it. 
00:32:50.819 [2024-06-11 12:27:03.694356] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.819 [2024-06-11 12:27:03.694437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.819 [2024-06-11 12:27:03.694448] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.819 [2024-06-11 12:27:03.694453] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.819 [2024-06-11 12:27:03.694457] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.819 [2024-06-11 12:27:03.694468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.819 qpair failed and we were unable to recover it. 00:32:50.819 [2024-06-11 12:27:03.704343] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.819 [2024-06-11 12:27:03.704390] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.819 [2024-06-11 12:27:03.704401] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.819 [2024-06-11 12:27:03.704407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.819 [2024-06-11 12:27:03.704413] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.819 [2024-06-11 12:27:03.704423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.819 qpair failed and we were unable to recover it. 00:32:50.819 [2024-06-11 12:27:03.714368] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.819 [2024-06-11 12:27:03.714419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.819 [2024-06-11 12:27:03.714433] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.819 [2024-06-11 12:27:03.714438] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.819 [2024-06-11 12:27:03.714443] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.819 [2024-06-11 12:27:03.714453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.819 qpair failed and we were unable to recover it. 
00:32:50.819 [2024-06-11 12:27:03.724411] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.819 [2024-06-11 12:27:03.724465] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.819 [2024-06-11 12:27:03.724476] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.819 [2024-06-11 12:27:03.724480] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.819 [2024-06-11 12:27:03.724485] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.819 [2024-06-11 12:27:03.724495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.819 qpair failed and we were unable to recover it. 00:32:50.819 [2024-06-11 12:27:03.734415] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.819 [2024-06-11 12:27:03.734464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.819 [2024-06-11 12:27:03.734474] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.819 [2024-06-11 12:27:03.734479] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.819 [2024-06-11 12:27:03.734484] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.819 [2024-06-11 12:27:03.734493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.819 qpair failed and we were unable to recover it. 00:32:50.819 [2024-06-11 12:27:03.744432] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.819 [2024-06-11 12:27:03.744483] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.819 [2024-06-11 12:27:03.744494] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.819 [2024-06-11 12:27:03.744498] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.819 [2024-06-11 12:27:03.744503] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.819 [2024-06-11 12:27:03.744513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.820 qpair failed and we were unable to recover it. 
00:32:50.820 [2024-06-11 12:27:03.754363] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.820 [2024-06-11 12:27:03.754412] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.820 [2024-06-11 12:27:03.754424] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.820 [2024-06-11 12:27:03.754428] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.820 [2024-06-11 12:27:03.754433] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.820 [2024-06-11 12:27:03.754446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.820 qpair failed and we were unable to recover it. 00:32:50.820 [2024-06-11 12:27:03.764519] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.820 [2024-06-11 12:27:03.764609] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.820 [2024-06-11 12:27:03.764620] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.820 [2024-06-11 12:27:03.764624] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.820 [2024-06-11 12:27:03.764630] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.820 [2024-06-11 12:27:03.764641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.820 qpair failed and we were unable to recover it. 00:32:50.820 [2024-06-11 12:27:03.774544] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.820 [2024-06-11 12:27:03.774594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.820 [2024-06-11 12:27:03.774605] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.820 [2024-06-11 12:27:03.774610] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.820 [2024-06-11 12:27:03.774614] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.820 [2024-06-11 12:27:03.774624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.820 qpair failed and we were unable to recover it. 
00:32:50.820 [2024-06-11 12:27:03.784568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.820 [2024-06-11 12:27:03.784616] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.820 [2024-06-11 12:27:03.784626] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.820 [2024-06-11 12:27:03.784631] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.820 [2024-06-11 12:27:03.784636] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.820 [2024-06-11 12:27:03.784646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.820 qpair failed and we were unable to recover it. 00:32:50.820 [2024-06-11 12:27:03.794617] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.820 [2024-06-11 12:27:03.794670] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.820 [2024-06-11 12:27:03.794681] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.820 [2024-06-11 12:27:03.794685] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.820 [2024-06-11 12:27:03.794690] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.820 [2024-06-11 12:27:03.794700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.820 qpair failed and we were unable to recover it. 00:32:50.820 [2024-06-11 12:27:03.804513] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.820 [2024-06-11 12:27:03.804570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.820 [2024-06-11 12:27:03.804583] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.820 [2024-06-11 12:27:03.804588] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.820 [2024-06-11 12:27:03.804592] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.820 [2024-06-11 12:27:03.804602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.820 qpair failed and we were unable to recover it. 
00:32:50.820 [2024-06-11 12:27:03.814647] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.820 [2024-06-11 12:27:03.814689] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.820 [2024-06-11 12:27:03.814700] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.820 [2024-06-11 12:27:03.814705] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.820 [2024-06-11 12:27:03.814709] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.820 [2024-06-11 12:27:03.814719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.820 qpair failed and we were unable to recover it. 00:32:50.820 [2024-06-11 12:27:03.824682] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.820 [2024-06-11 12:27:03.824740] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.820 [2024-06-11 12:27:03.824751] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.820 [2024-06-11 12:27:03.824756] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.820 [2024-06-11 12:27:03.824760] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.820 [2024-06-11 12:27:03.824770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.820 qpair failed and we were unable to recover it. 00:32:50.820 [2024-06-11 12:27:03.834717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.820 [2024-06-11 12:27:03.834767] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.820 [2024-06-11 12:27:03.834778] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.820 [2024-06-11 12:27:03.834783] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.820 [2024-06-11 12:27:03.834787] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.820 [2024-06-11 12:27:03.834797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.820 qpair failed and we were unable to recover it. 
00:32:50.820 [2024-06-11 12:27:03.844753] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.820 [2024-06-11 12:27:03.844811] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.820 [2024-06-11 12:27:03.844821] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.820 [2024-06-11 12:27:03.844828] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.820 [2024-06-11 12:27:03.844835] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:50.820 [2024-06-11 12:27:03.844845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:50.820 qpair failed and we were unable to recover it. 00:32:51.082 [2024-06-11 12:27:03.854764] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.082 [2024-06-11 12:27:03.854813] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.082 [2024-06-11 12:27:03.854824] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.082 [2024-06-11 12:27:03.854829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.082 [2024-06-11 12:27:03.854834] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.082 [2024-06-11 12:27:03.854843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.082 qpair failed and we were unable to recover it. 00:32:51.082 [2024-06-11 12:27:03.864666] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.082 [2024-06-11 12:27:03.864716] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.082 [2024-06-11 12:27:03.864727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.082 [2024-06-11 12:27:03.864731] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.082 [2024-06-11 12:27:03.864736] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.082 [2024-06-11 12:27:03.864746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.082 qpair failed and we were unable to recover it. 
00:32:51.082 [2024-06-11 12:27:03.874835] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.082 [2024-06-11 12:27:03.874886] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.082 [2024-06-11 12:27:03.874897] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.082 [2024-06-11 12:27:03.874902] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.082 [2024-06-11 12:27:03.874906] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.082 [2024-06-11 12:27:03.874916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.082 qpair failed and we were unable to recover it. 00:32:51.082 [2024-06-11 12:27:03.884876] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.082 [2024-06-11 12:27:03.884932] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.082 [2024-06-11 12:27:03.884944] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.082 [2024-06-11 12:27:03.884948] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.082 [2024-06-11 12:27:03.884953] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.082 [2024-06-11 12:27:03.884963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.082 qpair failed and we were unable to recover it. 00:32:51.082 [2024-06-11 12:27:03.894792] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.082 [2024-06-11 12:27:03.894893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.082 [2024-06-11 12:27:03.894904] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.082 [2024-06-11 12:27:03.894909] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.082 [2024-06-11 12:27:03.894914] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.082 [2024-06-11 12:27:03.894924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.082 qpair failed and we were unable to recover it. 
00:32:51.082 [2024-06-11 12:27:03.904901] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.082 [2024-06-11 12:27:03.904946] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.082 [2024-06-11 12:27:03.904957] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.082 [2024-06-11 12:27:03.904962] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.082 [2024-06-11 12:27:03.904966] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.082 [2024-06-11 12:27:03.904977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.082 qpair failed and we were unable to recover it. 00:32:51.082 [2024-06-11 12:27:03.914960] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.082 [2024-06-11 12:27:03.915006] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.082 [2024-06-11 12:27:03.915020] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.082 [2024-06-11 12:27:03.915025] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.082 [2024-06-11 12:27:03.915030] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.082 [2024-06-11 12:27:03.915040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.082 qpair failed and we were unable to recover it. 00:32:51.082 [2024-06-11 12:27:03.925024] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.082 [2024-06-11 12:27:03.925086] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.082 [2024-06-11 12:27:03.925097] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.082 [2024-06-11 12:27:03.925102] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.082 [2024-06-11 12:27:03.925106] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.082 [2024-06-11 12:27:03.925116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.082 qpair failed and we were unable to recover it. 
00:32:51.082 [2024-06-11 12:27:03.935026] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.082 [2024-06-11 12:27:03.935073] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.082 [2024-06-11 12:27:03.935083] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.082 [2024-06-11 12:27:03.935088] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.082 [2024-06-11 12:27:03.935095] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.082 [2024-06-11 12:27:03.935106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.082 qpair failed and we were unable to recover it. 00:32:51.082 [2024-06-11 12:27:03.945000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.083 [2024-06-11 12:27:03.945050] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.083 [2024-06-11 12:27:03.945061] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.083 [2024-06-11 12:27:03.945066] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.083 [2024-06-11 12:27:03.945071] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.083 [2024-06-11 12:27:03.945081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.083 qpair failed and we were unable to recover it. 00:32:51.083 [2024-06-11 12:27:03.954935] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.083 [2024-06-11 12:27:03.954986] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.083 [2024-06-11 12:27:03.954997] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.083 [2024-06-11 12:27:03.955002] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.083 [2024-06-11 12:27:03.955007] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.083 [2024-06-11 12:27:03.955025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.083 qpair failed and we were unable to recover it. 
00:32:51.083 [2024-06-11 12:27:03.965100] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.083 [2024-06-11 12:27:03.965154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.083 [2024-06-11 12:27:03.965164] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.083 [2024-06-11 12:27:03.965169] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.083 [2024-06-11 12:27:03.965174] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.083 [2024-06-11 12:27:03.965184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.083 qpair failed and we were unable to recover it. 00:32:51.083 [2024-06-11 12:27:03.975128] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.083 [2024-06-11 12:27:03.975180] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.083 [2024-06-11 12:27:03.975191] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.083 [2024-06-11 12:27:03.975196] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.083 [2024-06-11 12:27:03.975200] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.083 [2024-06-11 12:27:03.975210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.083 qpair failed and we were unable to recover it. 00:32:51.083 [2024-06-11 12:27:03.985169] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.083 [2024-06-11 12:27:03.985218] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.083 [2024-06-11 12:27:03.985229] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.083 [2024-06-11 12:27:03.985234] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.083 [2024-06-11 12:27:03.985238] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.083 [2024-06-11 12:27:03.985248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.083 qpair failed and we were unable to recover it. 
00:32:51.083 [2024-06-11 12:27:03.995186] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.083 [2024-06-11 12:27:03.995234] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.083 [2024-06-11 12:27:03.995245] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.083 [2024-06-11 12:27:03.995250] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.083 [2024-06-11 12:27:03.995255] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.083 [2024-06-11 12:27:03.995265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.083 qpair failed and we were unable to recover it. 00:32:51.083 [2024-06-11 12:27:04.005230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.083 [2024-06-11 12:27:04.005290] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.083 [2024-06-11 12:27:04.005301] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.083 [2024-06-11 12:27:04.005306] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.083 [2024-06-11 12:27:04.005310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.083 [2024-06-11 12:27:04.005320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.083 qpair failed and we were unable to recover it. 00:32:51.083 [2024-06-11 12:27:04.015253] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.083 [2024-06-11 12:27:04.015298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.083 [2024-06-11 12:27:04.015309] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.083 [2024-06-11 12:27:04.015314] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.083 [2024-06-11 12:27:04.015318] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.083 [2024-06-11 12:27:04.015329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.083 qpair failed and we were unable to recover it. 
00:32:51.083 [2024-06-11 12:27:04.025243] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.083 [2024-06-11 12:27:04.025291] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.083 [2024-06-11 12:27:04.025302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.083 [2024-06-11 12:27:04.025310] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.083 [2024-06-11 12:27:04.025314] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.083 [2024-06-11 12:27:04.025324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.083 qpair failed and we were unable to recover it. 00:32:51.083 [2024-06-11 12:27:04.035319] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.083 [2024-06-11 12:27:04.035368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.083 [2024-06-11 12:27:04.035379] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.083 [2024-06-11 12:27:04.035384] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.083 [2024-06-11 12:27:04.035388] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.083 [2024-06-11 12:27:04.035398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.083 qpair failed and we were unable to recover it. 00:32:51.083 [2024-06-11 12:27:04.045329] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.083 [2024-06-11 12:27:04.045380] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.083 [2024-06-11 12:27:04.045390] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.083 [2024-06-11 12:27:04.045396] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.083 [2024-06-11 12:27:04.045400] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.083 [2024-06-11 12:27:04.045410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.083 qpair failed and we were unable to recover it. 
00:32:51.083 [2024-06-11 12:27:04.055360] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.083 [2024-06-11 12:27:04.055413] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.083 [2024-06-11 12:27:04.055423] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.083 [2024-06-11 12:27:04.055428] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.083 [2024-06-11 12:27:04.055433] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.083 [2024-06-11 12:27:04.055443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.083 qpair failed and we were unable to recover it. 00:32:51.083 [2024-06-11 12:27:04.065395] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.083 [2024-06-11 12:27:04.065443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.083 [2024-06-11 12:27:04.065454] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.083 [2024-06-11 12:27:04.065459] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.083 [2024-06-11 12:27:04.065463] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.083 [2024-06-11 12:27:04.065473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.083 qpair failed and we were unable to recover it. 00:32:51.083 [2024-06-11 12:27:04.075388] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.083 [2024-06-11 12:27:04.075440] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.083 [2024-06-11 12:27:04.075452] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.083 [2024-06-11 12:27:04.075457] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.083 [2024-06-11 12:27:04.075461] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.083 [2024-06-11 12:27:04.075471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.083 qpair failed and we were unable to recover it. 
00:32:51.083 [2024-06-11 12:27:04.085454] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.083 [2024-06-11 12:27:04.085504] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.083 [2024-06-11 12:27:04.085515] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.083 [2024-06-11 12:27:04.085521] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.083 [2024-06-11 12:27:04.085525] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.083 [2024-06-11 12:27:04.085535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.083 qpair failed and we were unable to recover it. 00:32:51.083 [2024-06-11 12:27:04.095572] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.083 [2024-06-11 12:27:04.095652] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.083 [2024-06-11 12:27:04.095663] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.083 [2024-06-11 12:27:04.095668] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.083 [2024-06-11 12:27:04.095672] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.083 [2024-06-11 12:27:04.095682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.083 qpair failed and we were unable to recover it. 00:32:51.083 [2024-06-11 12:27:04.105540] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.083 [2024-06-11 12:27:04.105590] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.083 [2024-06-11 12:27:04.105600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.083 [2024-06-11 12:27:04.105605] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.083 [2024-06-11 12:27:04.105609] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.083 [2024-06-11 12:27:04.105620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.083 qpair failed and we were unable to recover it. 
00:32:51.083 [2024-06-11 12:27:04.115649] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.083 [2024-06-11 12:27:04.115707] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.083 [2024-06-11 12:27:04.115720] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.345 [2024-06-11 12:27:04.115725] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.345 [2024-06-11 12:27:04.115733] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.345 [2024-06-11 12:27:04.115743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.345 qpair failed and we were unable to recover it. 00:32:51.345 [2024-06-11 12:27:04.125667] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.345 [2024-06-11 12:27:04.125734] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.345 [2024-06-11 12:27:04.125748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.345 [2024-06-11 12:27:04.125754] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.345 [2024-06-11 12:27:04.125758] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.345 [2024-06-11 12:27:04.125772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.345 qpair failed and we were unable to recover it. 00:32:51.345 [2024-06-11 12:27:04.135640] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.345 [2024-06-11 12:27:04.135686] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.345 [2024-06-11 12:27:04.135696] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.345 [2024-06-11 12:27:04.135701] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.345 [2024-06-11 12:27:04.135706] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.345 [2024-06-11 12:27:04.135716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.345 qpair failed and we were unable to recover it. 
00:32:51.345 [2024-06-11 12:27:04.145483] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.345 [2024-06-11 12:27:04.145527] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.345 [2024-06-11 12:27:04.145538] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.345 [2024-06-11 12:27:04.145542] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.345 [2024-06-11 12:27:04.145547] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.345 [2024-06-11 12:27:04.145557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.345 qpair failed and we were unable to recover it. 00:32:51.345 [2024-06-11 12:27:04.155525] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.345 [2024-06-11 12:27:04.155585] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.345 [2024-06-11 12:27:04.155596] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.345 [2024-06-11 12:27:04.155601] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.345 [2024-06-11 12:27:04.155605] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.345 [2024-06-11 12:27:04.155615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.345 qpair failed and we were unable to recover it. 00:32:51.345 [2024-06-11 12:27:04.165679] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.345 [2024-06-11 12:27:04.165730] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.345 [2024-06-11 12:27:04.165741] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.345 [2024-06-11 12:27:04.165746] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.345 [2024-06-11 12:27:04.165750] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.345 [2024-06-11 12:27:04.165760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.345 qpair failed and we were unable to recover it. 
00:32:51.345 [2024-06-11 12:27:04.175705] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.345 [2024-06-11 12:27:04.175754] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.345 [2024-06-11 12:27:04.175766] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.345 [2024-06-11 12:27:04.175771] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.345 [2024-06-11 12:27:04.175775] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.345 [2024-06-11 12:27:04.175785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.345 qpair failed and we were unable to recover it. 00:32:51.345 [2024-06-11 12:27:04.185739] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.345 [2024-06-11 12:27:04.185782] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.345 [2024-06-11 12:27:04.185793] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.345 [2024-06-11 12:27:04.185798] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.345 [2024-06-11 12:27:04.185803] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.345 [2024-06-11 12:27:04.185813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.345 qpair failed and we were unable to recover it. 00:32:51.345 [2024-06-11 12:27:04.195635] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.345 [2024-06-11 12:27:04.195680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.345 [2024-06-11 12:27:04.195691] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.345 [2024-06-11 12:27:04.195696] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.345 [2024-06-11 12:27:04.195700] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.345 [2024-06-11 12:27:04.195710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.345 qpair failed and we were unable to recover it. 
00:32:51.345 [2024-06-11 12:27:04.205834] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.345 [2024-06-11 12:27:04.205883] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.345 [2024-06-11 12:27:04.205896] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.345 [2024-06-11 12:27:04.205901] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.345 [2024-06-11 12:27:04.205905] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.345 [2024-06-11 12:27:04.205915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.345 qpair failed and we were unable to recover it. 00:32:51.345 [2024-06-11 12:27:04.215713] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.345 [2024-06-11 12:27:04.215766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.345 [2024-06-11 12:27:04.215777] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.345 [2024-06-11 12:27:04.215782] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.346 [2024-06-11 12:27:04.215786] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.346 [2024-06-11 12:27:04.215796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.346 qpair failed and we were unable to recover it. 00:32:51.346 [2024-06-11 12:27:04.225851] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.346 [2024-06-11 12:27:04.225900] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.346 [2024-06-11 12:27:04.225911] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.346 [2024-06-11 12:27:04.225916] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.346 [2024-06-11 12:27:04.225920] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.346 [2024-06-11 12:27:04.225930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.346 qpair failed and we were unable to recover it. 
00:32:51.346 [2024-06-11 12:27:04.235892] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.346 [2024-06-11 12:27:04.235941] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.346 [2024-06-11 12:27:04.235952] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.346 [2024-06-11 12:27:04.235956] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.346 [2024-06-11 12:27:04.235961] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.346 [2024-06-11 12:27:04.235970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.346 qpair failed and we were unable to recover it. 00:32:51.346 [2024-06-11 12:27:04.245920] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.346 [2024-06-11 12:27:04.245967] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.346 [2024-06-11 12:27:04.245978] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.346 [2024-06-11 12:27:04.245983] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.346 [2024-06-11 12:27:04.245988] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.346 [2024-06-11 12:27:04.246003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.346 qpair failed and we were unable to recover it. 00:32:51.346 [2024-06-11 12:27:04.255932] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.346 [2024-06-11 12:27:04.255980] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.346 [2024-06-11 12:27:04.255991] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.346 [2024-06-11 12:27:04.255997] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.346 [2024-06-11 12:27:04.256001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.346 [2024-06-11 12:27:04.256011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.346 qpair failed and we were unable to recover it. 
00:32:51.346 [2024-06-11 12:27:04.265967] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.346 [2024-06-11 12:27:04.266021] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.346 [2024-06-11 12:27:04.266033] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.346 [2024-06-11 12:27:04.266037] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.346 [2024-06-11 12:27:04.266042] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.346 [2024-06-11 12:27:04.266052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.346 qpair failed and we were unable to recover it. 00:32:51.346 [2024-06-11 12:27:04.276013] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.346 [2024-06-11 12:27:04.276068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.346 [2024-06-11 12:27:04.276079] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.346 [2024-06-11 12:27:04.276084] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.346 [2024-06-11 12:27:04.276088] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.346 [2024-06-11 12:27:04.276098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.346 qpair failed and we were unable to recover it. 00:32:51.346 [2024-06-11 12:27:04.285946] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.346 [2024-06-11 12:27:04.286039] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.346 [2024-06-11 12:27:04.286050] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.346 [2024-06-11 12:27:04.286055] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.346 [2024-06-11 12:27:04.286059] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.346 [2024-06-11 12:27:04.286070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.346 qpair failed and we were unable to recover it. 
00:32:51.346 [2024-06-11 12:27:04.296030] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.346 [2024-06-11 12:27:04.296088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.346 [2024-06-11 12:27:04.296101] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.346 [2024-06-11 12:27:04.296106] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.346 [2024-06-11 12:27:04.296110] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.346 [2024-06-11 12:27:04.296120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.346 qpair failed and we were unable to recover it. 00:32:51.346 [2024-06-11 12:27:04.306083] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.346 [2024-06-11 12:27:04.306134] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.346 [2024-06-11 12:27:04.306144] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.346 [2024-06-11 12:27:04.306149] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.346 [2024-06-11 12:27:04.306154] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.346 [2024-06-11 12:27:04.306164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.346 qpair failed and we were unable to recover it. 00:32:51.346 [2024-06-11 12:27:04.316135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.346 [2024-06-11 12:27:04.316183] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.346 [2024-06-11 12:27:04.316193] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.346 [2024-06-11 12:27:04.316198] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.346 [2024-06-11 12:27:04.316202] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.346 [2024-06-11 12:27:04.316212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.346 qpair failed and we were unable to recover it. 
00:32:51.346 [2024-06-11 12:27:04.326162] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.346 [2024-06-11 12:27:04.326216] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.346 [2024-06-11 12:27:04.326226] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.346 [2024-06-11 12:27:04.326231] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.346 [2024-06-11 12:27:04.326236] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.346 [2024-06-11 12:27:04.326245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.346 qpair failed and we were unable to recover it. 00:32:51.346 [2024-06-11 12:27:04.336211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.346 [2024-06-11 12:27:04.336293] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.346 [2024-06-11 12:27:04.336304] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.346 [2024-06-11 12:27:04.336309] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.346 [2024-06-11 12:27:04.336317] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.346 [2024-06-11 12:27:04.336327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.346 qpair failed and we were unable to recover it. 00:32:51.346 [2024-06-11 12:27:04.346328] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.346 [2024-06-11 12:27:04.346399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.346 [2024-06-11 12:27:04.346410] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.346 [2024-06-11 12:27:04.346414] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.346 [2024-06-11 12:27:04.346419] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.347 [2024-06-11 12:27:04.346429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.347 qpair failed and we were unable to recover it. 
00:32:51.347 [2024-06-11 12:27:04.356294] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.347 [2024-06-11 12:27:04.356345] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.347 [2024-06-11 12:27:04.356355] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.347 [2024-06-11 12:27:04.356360] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.347 [2024-06-11 12:27:04.356365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.347 [2024-06-11 12:27:04.356374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.347 qpair failed and we were unable to recover it. 00:32:51.347 [2024-06-11 12:27:04.366265] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.347 [2024-06-11 12:27:04.366319] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.347 [2024-06-11 12:27:04.366330] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.347 [2024-06-11 12:27:04.366335] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.347 [2024-06-11 12:27:04.366339] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.347 [2024-06-11 12:27:04.366349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.347 qpair failed and we were unable to recover it. 00:32:51.347 [2024-06-11 12:27:04.376226] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.347 [2024-06-11 12:27:04.376279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.347 [2024-06-11 12:27:04.376289] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.347 [2024-06-11 12:27:04.376294] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.347 [2024-06-11 12:27:04.376299] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.347 [2024-06-11 12:27:04.376308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.347 qpair failed and we were unable to recover it. 
00:32:51.609 [2024-06-11 12:27:04.386336] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.609 [2024-06-11 12:27:04.386386] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.609 [2024-06-11 12:27:04.386396] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.609 [2024-06-11 12:27:04.386401] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.609 [2024-06-11 12:27:04.386405] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.609 [2024-06-11 12:27:04.386415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.609 qpair failed and we were unable to recover it. 00:32:51.609 [2024-06-11 12:27:04.396351] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.609 [2024-06-11 12:27:04.396400] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.609 [2024-06-11 12:27:04.396411] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.609 [2024-06-11 12:27:04.396415] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.609 [2024-06-11 12:27:04.396420] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.609 [2024-06-11 12:27:04.396429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.609 qpair failed and we were unable to recover it. 00:32:51.609 [2024-06-11 12:27:04.406301] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.609 [2024-06-11 12:27:04.406355] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.609 [2024-06-11 12:27:04.406365] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.609 [2024-06-11 12:27:04.406371] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.609 [2024-06-11 12:27:04.406376] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.609 [2024-06-11 12:27:04.406385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.609 qpair failed and we were unable to recover it. 
00:32:51.609 [2024-06-11 12:27:04.416387] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.609 [2024-06-11 12:27:04.416434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.609 [2024-06-11 12:27:04.416445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.609 [2024-06-11 12:27:04.416450] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.609 [2024-06-11 12:27:04.416454] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.609 [2024-06-11 12:27:04.416464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.609 qpair failed and we were unable to recover it. 00:32:51.609 [2024-06-11 12:27:04.426320] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.609 [2024-06-11 12:27:04.426368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.609 [2024-06-11 12:27:04.426379] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.609 [2024-06-11 12:27:04.426383] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.609 [2024-06-11 12:27:04.426390] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.609 [2024-06-11 12:27:04.426400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.609 qpair failed and we were unable to recover it. 00:32:51.609 [2024-06-11 12:27:04.436350] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.609 [2024-06-11 12:27:04.436403] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.609 [2024-06-11 12:27:04.436414] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.609 [2024-06-11 12:27:04.436419] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.609 [2024-06-11 12:27:04.436424] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.609 [2024-06-11 12:27:04.436433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.609 qpair failed and we were unable to recover it. 
00:32:51.609 [2024-06-11 12:27:04.446508] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.609 [2024-06-11 12:27:04.446556] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.609 [2024-06-11 12:27:04.446567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.609 [2024-06-11 12:27:04.446571] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.609 [2024-06-11 12:27:04.446576] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.609 [2024-06-11 12:27:04.446585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.609 qpair failed and we were unable to recover it. 00:32:51.609 [2024-06-11 12:27:04.456394] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.609 [2024-06-11 12:27:04.456443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.609 [2024-06-11 12:27:04.456453] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.609 [2024-06-11 12:27:04.456458] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.609 [2024-06-11 12:27:04.456463] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.609 [2024-06-11 12:27:04.456473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.609 qpair failed and we were unable to recover it. 00:32:51.609 [2024-06-11 12:27:04.466554] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.609 [2024-06-11 12:27:04.466605] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.609 [2024-06-11 12:27:04.466616] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.609 [2024-06-11 12:27:04.466621] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.609 [2024-06-11 12:27:04.466625] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.609 [2024-06-11 12:27:04.466634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.609 qpair failed and we were unable to recover it. 
00:32:51.609 [2024-06-11 12:27:04.476591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.609 [2024-06-11 12:27:04.476639] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.609 [2024-06-11 12:27:04.476650] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.609 [2024-06-11 12:27:04.476655] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.609 [2024-06-11 12:27:04.476660] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.609 [2024-06-11 12:27:04.476669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.609 qpair failed and we were unable to recover it. 00:32:51.609 [2024-06-11 12:27:04.486622] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.609 [2024-06-11 12:27:04.486674] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.609 [2024-06-11 12:27:04.486685] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.609 [2024-06-11 12:27:04.486689] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.609 [2024-06-11 12:27:04.486694] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.609 [2024-06-11 12:27:04.486704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.609 qpair failed and we were unable to recover it. 00:32:51.609 [2024-06-11 12:27:04.496634] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.609 [2024-06-11 12:27:04.496680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.609 [2024-06-11 12:27:04.496690] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.609 [2024-06-11 12:27:04.496696] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.609 [2024-06-11 12:27:04.496700] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.609 [2024-06-11 12:27:04.496710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.609 qpair failed and we were unable to recover it. 
00:32:51.609 [2024-06-11 12:27:04.506658] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.609 [2024-06-11 12:27:04.506714] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.609 [2024-06-11 12:27:04.506725] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.609 [2024-06-11 12:27:04.506730] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.609 [2024-06-11 12:27:04.506734] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.609 [2024-06-11 12:27:04.506744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.609 qpair failed and we were unable to recover it. 00:32:51.609 [2024-06-11 12:27:04.516704] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.609 [2024-06-11 12:27:04.516754] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.609 [2024-06-11 12:27:04.516764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.609 [2024-06-11 12:27:04.516772] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.609 [2024-06-11 12:27:04.516776] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.609 [2024-06-11 12:27:04.516786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.609 qpair failed and we were unable to recover it. 00:32:51.609 [2024-06-11 12:27:04.526724] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.609 [2024-06-11 12:27:04.526775] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.609 [2024-06-11 12:27:04.526786] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.609 [2024-06-11 12:27:04.526791] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.609 [2024-06-11 12:27:04.526795] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.609 [2024-06-11 12:27:04.526805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.609 qpair failed and we were unable to recover it. 
00:32:51.609 [2024-06-11 12:27:04.536716] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.609 [2024-06-11 12:27:04.536768] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.609 [2024-06-11 12:27:04.536779] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.609 [2024-06-11 12:27:04.536784] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.609 [2024-06-11 12:27:04.536788] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.609 [2024-06-11 12:27:04.536797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.609 qpair failed and we were unable to recover it. 00:32:51.609 [2024-06-11 12:27:04.546647] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.609 [2024-06-11 12:27:04.546697] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.609 [2024-06-11 12:27:04.546707] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.609 [2024-06-11 12:27:04.546712] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.609 [2024-06-11 12:27:04.546716] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.609 [2024-06-11 12:27:04.546726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.609 qpair failed and we were unable to recover it. 00:32:51.609 [2024-06-11 12:27:04.556811] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.609 [2024-06-11 12:27:04.556859] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.609 [2024-06-11 12:27:04.556869] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.609 [2024-06-11 12:27:04.556874] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.609 [2024-06-11 12:27:04.556878] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.609 [2024-06-11 12:27:04.556887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.609 qpair failed and we were unable to recover it. 
00:32:51.609 [2024-06-11 12:27:04.566824] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.609 [2024-06-11 12:27:04.566870] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.609 [2024-06-11 12:27:04.566882] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.609 [2024-06-11 12:27:04.566887] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.609 [2024-06-11 12:27:04.566891] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.609 [2024-06-11 12:27:04.566900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.609 qpair failed and we were unable to recover it. 00:32:51.609 [2024-06-11 12:27:04.576740] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.609 [2024-06-11 12:27:04.576796] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.609 [2024-06-11 12:27:04.576806] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.609 [2024-06-11 12:27:04.576811] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.609 [2024-06-11 12:27:04.576816] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.609 [2024-06-11 12:27:04.576825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.609 qpair failed and we were unable to recover it. 00:32:51.609 [2024-06-11 12:27:04.586761] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.609 [2024-06-11 12:27:04.586813] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.609 [2024-06-11 12:27:04.586823] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.609 [2024-06-11 12:27:04.586828] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.609 [2024-06-11 12:27:04.586832] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.609 [2024-06-11 12:27:04.586842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.609 qpair failed and we were unable to recover it. 
00:32:51.609 [2024-06-11 12:27:04.596921] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.609 [2024-06-11 12:27:04.596994] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.609 [2024-06-11 12:27:04.597005] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.609 [2024-06-11 12:27:04.597009] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.609 [2024-06-11 12:27:04.597014] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.609 [2024-06-11 12:27:04.597026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.609 qpair failed and we were unable to recover it. 00:32:51.609 [2024-06-11 12:27:04.606974] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.609 [2024-06-11 12:27:04.607030] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.610 [2024-06-11 12:27:04.607041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.610 [2024-06-11 12:27:04.607049] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.610 [2024-06-11 12:27:04.607053] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.610 [2024-06-11 12:27:04.607063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.610 qpair failed and we were unable to recover it. 00:32:51.610 [2024-06-11 12:27:04.616982] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.610 [2024-06-11 12:27:04.617031] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.610 [2024-06-11 12:27:04.617041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.610 [2024-06-11 12:27:04.617046] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.610 [2024-06-11 12:27:04.617051] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.610 [2024-06-11 12:27:04.617060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.610 qpair failed and we were unable to recover it. 
00:32:51.610 [2024-06-11 12:27:04.626879] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.610 [2024-06-11 12:27:04.626972] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.610 [2024-06-11 12:27:04.626982] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.610 [2024-06-11 12:27:04.626987] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.610 [2024-06-11 12:27:04.626992] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.610 [2024-06-11 12:27:04.627002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.610 qpair failed and we were unable to recover it. 00:32:51.610 [2024-06-11 12:27:04.637029] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.610 [2024-06-11 12:27:04.637079] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.610 [2024-06-11 12:27:04.637089] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.610 [2024-06-11 12:27:04.637094] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.610 [2024-06-11 12:27:04.637098] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.610 [2024-06-11 12:27:04.637108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.610 qpair failed and we were unable to recover it. 00:32:51.872 [2024-06-11 12:27:04.647071] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.872 [2024-06-11 12:27:04.647167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.872 [2024-06-11 12:27:04.647178] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.872 [2024-06-11 12:27:04.647183] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.872 [2024-06-11 12:27:04.647188] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.872 [2024-06-11 12:27:04.647198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.872 qpair failed and we were unable to recover it. 
00:32:51.872 [2024-06-11 12:27:04.657080] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.872 [2024-06-11 12:27:04.657176] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.872 [2024-06-11 12:27:04.657187] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.872 [2024-06-11 12:27:04.657191] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.872 [2024-06-11 12:27:04.657196] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.872 [2024-06-11 12:27:04.657205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.872 qpair failed and we were unable to recover it. 00:32:51.872 [2024-06-11 12:27:04.667155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.872 [2024-06-11 12:27:04.667215] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.872 [2024-06-11 12:27:04.667225] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.872 [2024-06-11 12:27:04.667230] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.872 [2024-06-11 12:27:04.667234] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.872 [2024-06-11 12:27:04.667244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.872 qpair failed and we were unable to recover it. 00:32:51.872 [2024-06-11 12:27:04.677163] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.872 [2024-06-11 12:27:04.677248] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.872 [2024-06-11 12:27:04.677259] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.872 [2024-06-11 12:27:04.677263] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.872 [2024-06-11 12:27:04.677268] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.872 [2024-06-11 12:27:04.677278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.872 qpair failed and we were unable to recover it. 
00:32:51.872 [2024-06-11 12:27:04.687172] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.872 [2024-06-11 12:27:04.687225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.872 [2024-06-11 12:27:04.687235] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.873 [2024-06-11 12:27:04.687240] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.873 [2024-06-11 12:27:04.687244] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.873 [2024-06-11 12:27:04.687254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.873 qpair failed and we were unable to recover it. 00:32:51.873 [2024-06-11 12:27:04.697064] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.873 [2024-06-11 12:27:04.697111] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.873 [2024-06-11 12:27:04.697125] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.873 [2024-06-11 12:27:04.697130] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.873 [2024-06-11 12:27:04.697134] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.873 [2024-06-11 12:27:04.697144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.873 qpair failed and we were unable to recover it. 00:32:51.873 [2024-06-11 12:27:04.707204] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.873 [2024-06-11 12:27:04.707251] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.873 [2024-06-11 12:27:04.707262] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.873 [2024-06-11 12:27:04.707267] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.873 [2024-06-11 12:27:04.707272] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.873 [2024-06-11 12:27:04.707281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.873 qpair failed and we were unable to recover it. 
00:32:51.873 [2024-06-11 12:27:04.717261] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.873 [2024-06-11 12:27:04.717310] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.873 [2024-06-11 12:27:04.717321] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.873 [2024-06-11 12:27:04.717326] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.873 [2024-06-11 12:27:04.717330] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.873 [2024-06-11 12:27:04.717340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.873 qpair failed and we were unable to recover it. 00:32:51.873 [2024-06-11 12:27:04.727310] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.873 [2024-06-11 12:27:04.727361] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.873 [2024-06-11 12:27:04.727371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.873 [2024-06-11 12:27:04.727376] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.873 [2024-06-11 12:27:04.727381] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.873 [2024-06-11 12:27:04.727390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.873 qpair failed and we were unable to recover it. 00:32:51.873 [2024-06-11 12:27:04.737198] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.873 [2024-06-11 12:27:04.737252] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.873 [2024-06-11 12:27:04.737262] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.873 [2024-06-11 12:27:04.737267] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.873 [2024-06-11 12:27:04.737271] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.873 [2024-06-11 12:27:04.737284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.873 qpair failed and we were unable to recover it. 
00:32:51.873 [2024-06-11 12:27:04.747355] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.873 [2024-06-11 12:27:04.747403] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.873 [2024-06-11 12:27:04.747413] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.873 [2024-06-11 12:27:04.747418] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.873 [2024-06-11 12:27:04.747422] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.873 [2024-06-11 12:27:04.747433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.873 qpair failed and we were unable to recover it. 00:32:51.873 [2024-06-11 12:27:04.757385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.873 [2024-06-11 12:27:04.757437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.873 [2024-06-11 12:27:04.757448] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.873 [2024-06-11 12:27:04.757452] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.873 [2024-06-11 12:27:04.757457] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.873 [2024-06-11 12:27:04.757466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.873 qpair failed and we were unable to recover it. 00:32:51.873 [2024-06-11 12:27:04.767408] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.873 [2024-06-11 12:27:04.767461] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.873 [2024-06-11 12:27:04.767471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.873 [2024-06-11 12:27:04.767476] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.873 [2024-06-11 12:27:04.767480] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.873 [2024-06-11 12:27:04.767490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.873 qpair failed and we were unable to recover it. 
00:32:51.873 [2024-06-11 12:27:04.777436] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.873 [2024-06-11 12:27:04.777531] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.873 [2024-06-11 12:27:04.777542] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.873 [2024-06-11 12:27:04.777547] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.873 [2024-06-11 12:27:04.777551] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.873 [2024-06-11 12:27:04.777561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.873 qpair failed and we were unable to recover it. 00:32:51.873 [2024-06-11 12:27:04.787421] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.873 [2024-06-11 12:27:04.787471] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.873 [2024-06-11 12:27:04.787485] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.873 [2024-06-11 12:27:04.787490] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.873 [2024-06-11 12:27:04.787494] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.873 [2024-06-11 12:27:04.787504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.873 qpair failed and we were unable to recover it. 00:32:51.873 [2024-06-11 12:27:04.797356] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.873 [2024-06-11 12:27:04.797409] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.873 [2024-06-11 12:27:04.797419] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.873 [2024-06-11 12:27:04.797424] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.873 [2024-06-11 12:27:04.797429] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.873 [2024-06-11 12:27:04.797438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.873 qpair failed and we were unable to recover it. 
00:32:51.873 [2024-06-11 12:27:04.807529] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.873 [2024-06-11 12:27:04.807574] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.873 [2024-06-11 12:27:04.807584] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.873 [2024-06-11 12:27:04.807589] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.873 [2024-06-11 12:27:04.807594] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.873 [2024-06-11 12:27:04.807604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.873 qpair failed and we were unable to recover it. 00:32:51.873 [2024-06-11 12:27:04.817547] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.873 [2024-06-11 12:27:04.817596] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.873 [2024-06-11 12:27:04.817606] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.873 [2024-06-11 12:27:04.817611] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.873 [2024-06-11 12:27:04.817616] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.873 [2024-06-11 12:27:04.817625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.874 qpair failed and we were unable to recover it. 00:32:51.874 [2024-06-11 12:27:04.827560] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.874 [2024-06-11 12:27:04.827605] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.874 [2024-06-11 12:27:04.827616] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.874 [2024-06-11 12:27:04.827621] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.874 [2024-06-11 12:27:04.827626] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.874 [2024-06-11 12:27:04.827639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.874 qpair failed and we were unable to recover it. 
00:32:51.874 [2024-06-11 12:27:04.837596] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.874 [2024-06-11 12:27:04.837645] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.874 [2024-06-11 12:27:04.837656] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.874 [2024-06-11 12:27:04.837661] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.874 [2024-06-11 12:27:04.837665] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.874 [2024-06-11 12:27:04.837675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.874 qpair failed and we were unable to recover it. 00:32:51.874 [2024-06-11 12:27:04.847618] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.874 [2024-06-11 12:27:04.847719] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.874 [2024-06-11 12:27:04.847730] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.874 [2024-06-11 12:27:04.847735] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.874 [2024-06-11 12:27:04.847740] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.874 [2024-06-11 12:27:04.847750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.874 qpair failed and we were unable to recover it. 00:32:51.874 [2024-06-11 12:27:04.857688] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.874 [2024-06-11 12:27:04.857739] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.874 [2024-06-11 12:27:04.857750] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.874 [2024-06-11 12:27:04.857755] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.874 [2024-06-11 12:27:04.857759] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.874 [2024-06-11 12:27:04.857769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.874 qpair failed and we were unable to recover it. 
00:32:51.874 [2024-06-11 12:27:04.867681] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.874 [2024-06-11 12:27:04.867731] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.874 [2024-06-11 12:27:04.867742] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.874 [2024-06-11 12:27:04.867747] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.874 [2024-06-11 12:27:04.867751] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.874 [2024-06-11 12:27:04.867761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.874 qpair failed and we were unable to recover it. 00:32:51.874 [2024-06-11 12:27:04.877706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.874 [2024-06-11 12:27:04.877755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.874 [2024-06-11 12:27:04.877766] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.874 [2024-06-11 12:27:04.877771] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.874 [2024-06-11 12:27:04.877775] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.874 [2024-06-11 12:27:04.877785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.874 qpair failed and we were unable to recover it. 00:32:51.874 [2024-06-11 12:27:04.887717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.874 [2024-06-11 12:27:04.887771] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.874 [2024-06-11 12:27:04.887782] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.874 [2024-06-11 12:27:04.887787] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.874 [2024-06-11 12:27:04.887791] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.874 [2024-06-11 12:27:04.887801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.874 qpair failed and we were unable to recover it. 
00:32:51.874 [2024-06-11 12:27:04.897718] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.874 [2024-06-11 12:27:04.897765] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.874 [2024-06-11 12:27:04.897776] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.874 [2024-06-11 12:27:04.897781] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.874 [2024-06-11 12:27:04.897785] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:51.874 [2024-06-11 12:27:04.897795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:51.874 qpair failed and we were unable to recover it. 00:32:52.137 [2024-06-11 12:27:04.907645] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.137 [2024-06-11 12:27:04.907703] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.137 [2024-06-11 12:27:04.907714] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.137 [2024-06-11 12:27:04.907720] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.137 [2024-06-11 12:27:04.907724] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.137 [2024-06-11 12:27:04.907734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.137 qpair failed and we were unable to recover it. 00:32:52.137 [2024-06-11 12:27:04.917710] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.137 [2024-06-11 12:27:04.917792] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.137 [2024-06-11 12:27:04.917803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.137 [2024-06-11 12:27:04.917808] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.137 [2024-06-11 12:27:04.917817] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.137 [2024-06-11 12:27:04.917827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.137 qpair failed and we were unable to recover it. 
00:32:52.137 [2024-06-11 12:27:04.927720] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.137 [2024-06-11 12:27:04.927781] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.137 [2024-06-11 12:27:04.927792] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.137 [2024-06-11 12:27:04.927796] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.137 [2024-06-11 12:27:04.927801] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.137 [2024-06-11 12:27:04.927811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.137 qpair failed and we were unable to recover it. 00:32:52.137 [2024-06-11 12:27:04.937874] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.137 [2024-06-11 12:27:04.937923] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.137 [2024-06-11 12:27:04.937934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.137 [2024-06-11 12:27:04.937939] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.137 [2024-06-11 12:27:04.937943] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.137 [2024-06-11 12:27:04.937953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.137 qpair failed and we were unable to recover it. 00:32:52.137 [2024-06-11 12:27:04.947906] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.137 [2024-06-11 12:27:04.947956] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.137 [2024-06-11 12:27:04.947966] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.137 [2024-06-11 12:27:04.947971] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.137 [2024-06-11 12:27:04.947976] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.137 [2024-06-11 12:27:04.947985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.137 qpair failed and we were unable to recover it. 
00:32:52.137 [2024-06-11 12:27:04.958004] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.137 [2024-06-11 12:27:04.958053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.137 [2024-06-11 12:27:04.958064] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.137 [2024-06-11 12:27:04.958069] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.137 [2024-06-11 12:27:04.958073] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.137 [2024-06-11 12:27:04.958083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.137 qpair failed and we were unable to recover it. 00:32:52.137 [2024-06-11 12:27:04.967952] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.137 [2024-06-11 12:27:04.968004] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.137 [2024-06-11 12:27:04.968015] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.137 [2024-06-11 12:27:04.968025] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.137 [2024-06-11 12:27:04.968029] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.137 [2024-06-11 12:27:04.968039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.137 qpair failed and we were unable to recover it. 00:32:52.137 [2024-06-11 12:27:04.977979] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.137 [2024-06-11 12:27:04.978024] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.137 [2024-06-11 12:27:04.978035] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.137 [2024-06-11 12:27:04.978039] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.137 [2024-06-11 12:27:04.978044] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.137 [2024-06-11 12:27:04.978054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.137 qpair failed and we were unable to recover it. 
00:32:52.137 [2024-06-11 12:27:04.988012] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.137 [2024-06-11 12:27:04.988064] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.137 [2024-06-11 12:27:04.988074] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.137 [2024-06-11 12:27:04.988079] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.137 [2024-06-11 12:27:04.988084] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.137 [2024-06-11 12:27:04.988094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.137 qpair failed and we were unable to recover it. 00:32:52.137 [2024-06-11 12:27:04.998039] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.137 [2024-06-11 12:27:04.998106] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.137 [2024-06-11 12:27:04.998116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.137 [2024-06-11 12:27:04.998121] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.137 [2024-06-11 12:27:04.998126] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.137 [2024-06-11 12:27:04.998135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.137 qpair failed and we were unable to recover it. 00:32:52.137 [2024-06-11 12:27:05.008039] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.137 [2024-06-11 12:27:05.008091] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.137 [2024-06-11 12:27:05.008101] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.137 [2024-06-11 12:27:05.008109] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.137 [2024-06-11 12:27:05.008113] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.137 [2024-06-11 12:27:05.008123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.137 qpair failed and we were unable to recover it. 
00:32:52.137 [2024-06-11 12:27:05.018015] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.137 [2024-06-11 12:27:05.018063] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.137 [2024-06-11 12:27:05.018074] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.137 [2024-06-11 12:27:05.018079] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.137 [2024-06-11 12:27:05.018084] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.137 [2024-06-11 12:27:05.018093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.137 qpair failed and we were unable to recover it. 00:32:52.137 [2024-06-11 12:27:05.028126] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.137 [2024-06-11 12:27:05.028172] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.137 [2024-06-11 12:27:05.028182] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.137 [2024-06-11 12:27:05.028187] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.138 [2024-06-11 12:27:05.028191] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.138 [2024-06-11 12:27:05.028200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.138 qpair failed and we were unable to recover it. 00:32:52.138 [2024-06-11 12:27:05.038130] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.138 [2024-06-11 12:27:05.038181] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.138 [2024-06-11 12:27:05.038192] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.138 [2024-06-11 12:27:05.038196] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.138 [2024-06-11 12:27:05.038201] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.138 [2024-06-11 12:27:05.038210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.138 qpair failed and we were unable to recover it. 
00:32:52.138 [2024-06-11 12:27:05.048147] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.138 [2024-06-11 12:27:05.048197] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.138 [2024-06-11 12:27:05.048208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.138 [2024-06-11 12:27:05.048213] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.138 [2024-06-11 12:27:05.048217] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.138 [2024-06-11 12:27:05.048227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.138 qpair failed and we were unable to recover it. 00:32:52.138 [2024-06-11 12:27:05.058163] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.138 [2024-06-11 12:27:05.058244] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.138 [2024-06-11 12:27:05.058254] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.138 [2024-06-11 12:27:05.058259] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.138 [2024-06-11 12:27:05.058264] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.138 [2024-06-11 12:27:05.058274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.138 qpair failed and we were unable to recover it. 00:32:52.138 [2024-06-11 12:27:05.068221] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.138 [2024-06-11 12:27:05.068264] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.138 [2024-06-11 12:27:05.068274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.138 [2024-06-11 12:27:05.068279] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.138 [2024-06-11 12:27:05.068283] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.138 [2024-06-11 12:27:05.068293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.138 qpair failed and we were unable to recover it. 
00:32:52.138 [2024-06-11 12:27:05.078178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.138 [2024-06-11 12:27:05.078228] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.138 [2024-06-11 12:27:05.078238] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.138 [2024-06-11 12:27:05.078243] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.138 [2024-06-11 12:27:05.078247] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.138 [2024-06-11 12:27:05.078257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.138 qpair failed and we were unable to recover it. 00:32:52.138 [2024-06-11 12:27:05.088255] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.138 [2024-06-11 12:27:05.088304] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.138 [2024-06-11 12:27:05.088315] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.138 [2024-06-11 12:27:05.088320] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.138 [2024-06-11 12:27:05.088324] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.138 [2024-06-11 12:27:05.088334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.138 qpair failed and we were unable to recover it. 00:32:52.138 [2024-06-11 12:27:05.098291] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.138 [2024-06-11 12:27:05.098336] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.138 [2024-06-11 12:27:05.098347] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.138 [2024-06-11 12:27:05.098354] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.138 [2024-06-11 12:27:05.098359] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.138 [2024-06-11 12:27:05.098369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.138 qpair failed and we were unable to recover it. 
00:32:52.138 [2024-06-11 12:27:05.108385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.138 [2024-06-11 12:27:05.108451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.138 [2024-06-11 12:27:05.108462] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.138 [2024-06-11 12:27:05.108466] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.138 [2024-06-11 12:27:05.108471] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.138 [2024-06-11 12:27:05.108481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.138 qpair failed and we were unable to recover it. 00:32:52.138 [2024-06-11 12:27:05.118395] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.138 [2024-06-11 12:27:05.118443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.138 [2024-06-11 12:27:05.118453] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.138 [2024-06-11 12:27:05.118458] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.138 [2024-06-11 12:27:05.118462] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.138 [2024-06-11 12:27:05.118472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.138 qpair failed and we were unable to recover it. 00:32:52.138 [2024-06-11 12:27:05.128369] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.138 [2024-06-11 12:27:05.128443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.138 [2024-06-11 12:27:05.128454] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.138 [2024-06-11 12:27:05.128459] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.138 [2024-06-11 12:27:05.128463] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.138 [2024-06-11 12:27:05.128473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.138 qpair failed and we were unable to recover it. 
00:32:52.138 [2024-06-11 12:27:05.138376] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.138 [2024-06-11 12:27:05.138423] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.138 [2024-06-11 12:27:05.138434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.138 [2024-06-11 12:27:05.138439] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.138 [2024-06-11 12:27:05.138443] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.138 [2024-06-11 12:27:05.138453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.138 qpair failed and we were unable to recover it. 00:32:52.138 [2024-06-11 12:27:05.148438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.138 [2024-06-11 12:27:05.148485] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.138 [2024-06-11 12:27:05.148496] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.138 [2024-06-11 12:27:05.148501] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.138 [2024-06-11 12:27:05.148505] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.138 [2024-06-11 12:27:05.148515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.138 qpair failed and we were unable to recover it. 00:32:52.138 [2024-06-11 12:27:05.158483] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.138 [2024-06-11 12:27:05.158529] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.138 [2024-06-11 12:27:05.158539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.138 [2024-06-11 12:27:05.158544] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.138 [2024-06-11 12:27:05.158549] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.138 [2024-06-11 12:27:05.158558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.138 qpair failed and we were unable to recover it. 
00:32:52.138 [2024-06-11 12:27:05.168439] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.139 [2024-06-11 12:27:05.168487] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.139 [2024-06-11 12:27:05.168498] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.139 [2024-06-11 12:27:05.168503] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.139 [2024-06-11 12:27:05.168508] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.139 [2024-06-11 12:27:05.168517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.139 qpair failed and we were unable to recover it. 00:32:52.401 [2024-06-11 12:27:05.178493] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.401 [2024-06-11 12:27:05.178535] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.401 [2024-06-11 12:27:05.178546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.401 [2024-06-11 12:27:05.178550] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.401 [2024-06-11 12:27:05.178555] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.401 [2024-06-11 12:27:05.178564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.401 qpair failed and we were unable to recover it. 00:32:52.401 [2024-06-11 12:27:05.188425] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.401 [2024-06-11 12:27:05.188469] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.401 [2024-06-11 12:27:05.188483] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.401 [2024-06-11 12:27:05.188488] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.401 [2024-06-11 12:27:05.188492] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.401 [2024-06-11 12:27:05.188502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.401 qpair failed and we were unable to recover it. 
00:32:52.401 [2024-06-11 12:27:05.198459] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.401 [2024-06-11 12:27:05.198513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.401 [2024-06-11 12:27:05.198524] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.401 [2024-06-11 12:27:05.198529] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.401 [2024-06-11 12:27:05.198533] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.401 [2024-06-11 12:27:05.198543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.401 qpair failed and we were unable to recover it. 00:32:52.401 [2024-06-11 12:27:05.208588] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.401 [2024-06-11 12:27:05.208636] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.401 [2024-06-11 12:27:05.208647] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.401 [2024-06-11 12:27:05.208652] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.401 [2024-06-11 12:27:05.208657] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.401 [2024-06-11 12:27:05.208666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.401 qpair failed and we were unable to recover it. 00:32:52.401 [2024-06-11 12:27:05.218581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.401 [2024-06-11 12:27:05.218624] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.401 [2024-06-11 12:27:05.218635] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.401 [2024-06-11 12:27:05.218640] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.401 [2024-06-11 12:27:05.218644] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.401 [2024-06-11 12:27:05.218654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.401 qpair failed and we were unable to recover it. 
00:32:52.401 [2024-06-11 12:27:05.228553] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.401 [2024-06-11 12:27:05.228600] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.401 [2024-06-11 12:27:05.228611] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.401 [2024-06-11 12:27:05.228615] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.401 [2024-06-11 12:27:05.228620] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.401 [2024-06-11 12:27:05.228632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.401 qpair failed and we were unable to recover it. 00:32:52.401 [2024-06-11 12:27:05.238712] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.402 [2024-06-11 12:27:05.238762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.402 [2024-06-11 12:27:05.238774] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.402 [2024-06-11 12:27:05.238780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.402 [2024-06-11 12:27:05.238786] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.402 [2024-06-11 12:27:05.238797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.402 qpair failed and we were unable to recover it. 00:32:52.402 [2024-06-11 12:27:05.248691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.402 [2024-06-11 12:27:05.248785] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.402 [2024-06-11 12:27:05.248795] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.402 [2024-06-11 12:27:05.248801] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.402 [2024-06-11 12:27:05.248805] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.402 [2024-06-11 12:27:05.248815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.402 qpair failed and we were unable to recover it. 
00:32:52.402 [2024-06-11 12:27:05.258728] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.402 [2024-06-11 12:27:05.258769] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.402 [2024-06-11 12:27:05.258780] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.402 [2024-06-11 12:27:05.258785] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.402 [2024-06-11 12:27:05.258789] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.402 [2024-06-11 12:27:05.258799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.402 qpair failed and we were unable to recover it. 00:32:52.402 [2024-06-11 12:27:05.268665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.402 [2024-06-11 12:27:05.268706] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.402 [2024-06-11 12:27:05.268717] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.402 [2024-06-11 12:27:05.268722] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.402 [2024-06-11 12:27:05.268726] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.402 [2024-06-11 12:27:05.268736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.402 qpair failed and we were unable to recover it. 00:32:52.402 [2024-06-11 12:27:05.278678] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.402 [2024-06-11 12:27:05.278729] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.402 [2024-06-11 12:27:05.278742] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.402 [2024-06-11 12:27:05.278747] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.402 [2024-06-11 12:27:05.278752] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.402 [2024-06-11 12:27:05.278762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.402 qpair failed and we were unable to recover it. 
00:32:52.402 [2024-06-11 12:27:05.288805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.402 [2024-06-11 12:27:05.288848] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.402 [2024-06-11 12:27:05.288859] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.402 [2024-06-11 12:27:05.288864] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.402 [2024-06-11 12:27:05.288868] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.402 [2024-06-11 12:27:05.288878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.402 qpair failed and we were unable to recover it. 00:32:52.402 [2024-06-11 12:27:05.298824] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.402 [2024-06-11 12:27:05.298868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.402 [2024-06-11 12:27:05.298879] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.402 [2024-06-11 12:27:05.298884] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.402 [2024-06-11 12:27:05.298888] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.402 [2024-06-11 12:27:05.298898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.402 qpair failed and we were unable to recover it. 00:32:52.402 [2024-06-11 12:27:05.308861] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.402 [2024-06-11 12:27:05.308904] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.402 [2024-06-11 12:27:05.308914] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.402 [2024-06-11 12:27:05.308919] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.402 [2024-06-11 12:27:05.308924] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.402 [2024-06-11 12:27:05.308933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.402 qpair failed and we were unable to recover it. 
00:32:52.402 [2024-06-11 12:27:05.318934] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.402 [2024-06-11 12:27:05.318981] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.402 [2024-06-11 12:27:05.318992] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.402 [2024-06-11 12:27:05.318997] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.402 [2024-06-11 12:27:05.319001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.402 [2024-06-11 12:27:05.319014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.402 qpair failed and we were unable to recover it. 00:32:52.402 [2024-06-11 12:27:05.328786] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.402 [2024-06-11 12:27:05.328837] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.402 [2024-06-11 12:27:05.328848] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.402 [2024-06-11 12:27:05.328852] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.402 [2024-06-11 12:27:05.328857] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.402 [2024-06-11 12:27:05.328867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.402 qpair failed and we were unable to recover it. 00:32:52.402 [2024-06-11 12:27:05.338919] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.402 [2024-06-11 12:27:05.338988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.402 [2024-06-11 12:27:05.338999] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.402 [2024-06-11 12:27:05.339004] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.402 [2024-06-11 12:27:05.339008] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.402 [2024-06-11 12:27:05.339022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.402 qpair failed and we were unable to recover it. 
00:32:52.402 [2024-06-11 12:27:05.348935] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.402 [2024-06-11 12:27:05.348973] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.402 [2024-06-11 12:27:05.348983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.402 [2024-06-11 12:27:05.348988] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.402 [2024-06-11 12:27:05.348992] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.402 [2024-06-11 12:27:05.349002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.402 qpair failed and we were unable to recover it. 00:32:52.402 [2024-06-11 12:27:05.359040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.402 [2024-06-11 12:27:05.359088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.402 [2024-06-11 12:27:05.359099] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.402 [2024-06-11 12:27:05.359104] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.402 [2024-06-11 12:27:05.359108] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.402 [2024-06-11 12:27:05.359118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.402 qpair failed and we were unable to recover it. 00:32:52.402 [2024-06-11 12:27:05.369044] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.402 [2024-06-11 12:27:05.369089] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.402 [2024-06-11 12:27:05.369103] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.403 [2024-06-11 12:27:05.369108] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.403 [2024-06-11 12:27:05.369112] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.403 [2024-06-11 12:27:05.369122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.403 qpair failed and we were unable to recover it. 
00:32:52.403 [2024-06-11 12:27:05.379072] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.403 [2024-06-11 12:27:05.379113] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.403 [2024-06-11 12:27:05.379123] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.403 [2024-06-11 12:27:05.379128] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.403 [2024-06-11 12:27:05.379132] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.403 [2024-06-11 12:27:05.379142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.403 qpair failed and we were unable to recover it. 00:32:52.403 [2024-06-11 12:27:05.389107] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.403 [2024-06-11 12:27:05.389147] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.403 [2024-06-11 12:27:05.389157] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.403 [2024-06-11 12:27:05.389162] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.403 [2024-06-11 12:27:05.389166] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.403 [2024-06-11 12:27:05.389176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.403 qpair failed and we were unable to recover it. 00:32:52.403 [2024-06-11 12:27:05.399160] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.403 [2024-06-11 12:27:05.399206] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.403 [2024-06-11 12:27:05.399217] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.403 [2024-06-11 12:27:05.399221] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.403 [2024-06-11 12:27:05.399226] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.403 [2024-06-11 12:27:05.399236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.403 qpair failed and we were unable to recover it. 
00:32:52.403 [2024-06-11 12:27:05.409151] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.403 [2024-06-11 12:27:05.409199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.403 [2024-06-11 12:27:05.409210] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.403 [2024-06-11 12:27:05.409215] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.403 [2024-06-11 12:27:05.409222] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.403 [2024-06-11 12:27:05.409233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.403 qpair failed and we were unable to recover it. 00:32:52.403 [2024-06-11 12:27:05.419178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.403 [2024-06-11 12:27:05.419220] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.403 [2024-06-11 12:27:05.419231] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.403 [2024-06-11 12:27:05.419236] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.403 [2024-06-11 12:27:05.419241] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.403 [2024-06-11 12:27:05.419251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.403 qpair failed and we were unable to recover it. 00:32:52.403 [2024-06-11 12:27:05.429208] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.403 [2024-06-11 12:27:05.429255] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.403 [2024-06-11 12:27:05.429265] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.403 [2024-06-11 12:27:05.429270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.403 [2024-06-11 12:27:05.429274] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.403 [2024-06-11 12:27:05.429284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.403 qpair failed and we were unable to recover it. 
00:32:52.692 [2024-06-11 12:27:05.439299] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.692 [2024-06-11 12:27:05.439346] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.692 [2024-06-11 12:27:05.439356] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.692 [2024-06-11 12:27:05.439361] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.692 [2024-06-11 12:27:05.439366] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.692 [2024-06-11 12:27:05.439376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.692 qpair failed and we were unable to recover it. 00:32:52.692 [2024-06-11 12:27:05.449125] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.692 [2024-06-11 12:27:05.449178] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.692 [2024-06-11 12:27:05.449189] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.692 [2024-06-11 12:27:05.449194] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.692 [2024-06-11 12:27:05.449199] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.692 [2024-06-11 12:27:05.449210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.692 qpair failed and we were unable to recover it. 00:32:52.692 [2024-06-11 12:27:05.459251] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.692 [2024-06-11 12:27:05.459300] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.692 [2024-06-11 12:27:05.459311] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.692 [2024-06-11 12:27:05.459316] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.692 [2024-06-11 12:27:05.459321] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.692 [2024-06-11 12:27:05.459330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.692 qpair failed and we were unable to recover it. 
00:32:52.692 [2024-06-11 12:27:05.469328] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.692 [2024-06-11 12:27:05.469368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.692 [2024-06-11 12:27:05.469378] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.692 [2024-06-11 12:27:05.469383] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.692 [2024-06-11 12:27:05.469388] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.692 [2024-06-11 12:27:05.469397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.692 qpair failed and we were unable to recover it. 00:32:52.692 [2024-06-11 12:27:05.479373] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.692 [2024-06-11 12:27:05.479420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.692 [2024-06-11 12:27:05.479430] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.692 [2024-06-11 12:27:05.479436] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.692 [2024-06-11 12:27:05.479440] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.692 [2024-06-11 12:27:05.479450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.692 qpair failed and we were unable to recover it. 00:32:52.692 [2024-06-11 12:27:05.489364] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.692 [2024-06-11 12:27:05.489454] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.692 [2024-06-11 12:27:05.489465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.692 [2024-06-11 12:27:05.489469] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.692 [2024-06-11 12:27:05.489474] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.692 [2024-06-11 12:27:05.489484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.692 qpair failed and we were unable to recover it. 
00:32:52.692 [2024-06-11 12:27:05.499398] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.692 [2024-06-11 12:27:05.499444] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.692 [2024-06-11 12:27:05.499454] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.692 [2024-06-11 12:27:05.499459] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.692 [2024-06-11 12:27:05.499466] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.692 [2024-06-11 12:27:05.499476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.692 qpair failed and we were unable to recover it. 00:32:52.692 [2024-06-11 12:27:05.509394] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.693 [2024-06-11 12:27:05.509437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.693 [2024-06-11 12:27:05.509448] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.693 [2024-06-11 12:27:05.509453] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.693 [2024-06-11 12:27:05.509458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.693 [2024-06-11 12:27:05.509467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.693 qpair failed and we were unable to recover it. 00:32:52.693 [2024-06-11 12:27:05.519393] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.693 [2024-06-11 12:27:05.519444] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.693 [2024-06-11 12:27:05.519455] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.693 [2024-06-11 12:27:05.519460] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.693 [2024-06-11 12:27:05.519465] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.693 [2024-06-11 12:27:05.519475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.693 qpair failed and we were unable to recover it. 
00:32:52.693 [2024-06-11 12:27:05.529460] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.693 [2024-06-11 12:27:05.529505] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.693 [2024-06-11 12:27:05.529515] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.693 [2024-06-11 12:27:05.529520] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.693 [2024-06-11 12:27:05.529525] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.693 [2024-06-11 12:27:05.529534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.693 qpair failed and we were unable to recover it. 00:32:52.693 [2024-06-11 12:27:05.539577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.693 [2024-06-11 12:27:05.539635] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.693 [2024-06-11 12:27:05.539645] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.693 [2024-06-11 12:27:05.539650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.693 [2024-06-11 12:27:05.539655] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.693 [2024-06-11 12:27:05.539665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.693 qpair failed and we were unable to recover it. 00:32:52.693 [2024-06-11 12:27:05.549566] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.693 [2024-06-11 12:27:05.549609] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.693 [2024-06-11 12:27:05.549619] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.693 [2024-06-11 12:27:05.549624] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.693 [2024-06-11 12:27:05.549629] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.693 [2024-06-11 12:27:05.549638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.693 qpair failed and we were unable to recover it. 
00:32:52.693 [2024-06-11 12:27:05.559513] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.693 [2024-06-11 12:27:05.559610] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.693 [2024-06-11 12:27:05.559621] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.693 [2024-06-11 12:27:05.559626] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.693 [2024-06-11 12:27:05.559631] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.693 [2024-06-11 12:27:05.559641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.693 qpair failed and we were unable to recover it. 00:32:52.693 [2024-06-11 12:27:05.569606] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.693 [2024-06-11 12:27:05.569651] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.693 [2024-06-11 12:27:05.569661] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.693 [2024-06-11 12:27:05.569667] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.693 [2024-06-11 12:27:05.569671] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.693 [2024-06-11 12:27:05.569681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.693 qpair failed and we were unable to recover it. 00:32:52.693 [2024-06-11 12:27:05.579607] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.693 [2024-06-11 12:27:05.579651] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.693 [2024-06-11 12:27:05.579661] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.693 [2024-06-11 12:27:05.579666] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.693 [2024-06-11 12:27:05.579670] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.693 [2024-06-11 12:27:05.579680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.693 qpair failed and we were unable to recover it. 
00:32:52.693 [2024-06-11 12:27:05.589641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.693 [2024-06-11 12:27:05.589682] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.693 [2024-06-11 12:27:05.589693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.693 [2024-06-11 12:27:05.589701] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.693 [2024-06-11 12:27:05.589705] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.693 [2024-06-11 12:27:05.589715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.693 qpair failed and we were unable to recover it. 00:32:52.693 [2024-06-11 12:27:05.599712] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.693 [2024-06-11 12:27:05.599772] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.693 [2024-06-11 12:27:05.599785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.693 [2024-06-11 12:27:05.599790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.693 [2024-06-11 12:27:05.599795] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.693 [2024-06-11 12:27:05.599805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.693 qpair failed and we were unable to recover it. 00:32:52.693 [2024-06-11 12:27:05.609698] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.693 [2024-06-11 12:27:05.609753] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.693 [2024-06-11 12:27:05.609771] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.693 [2024-06-11 12:27:05.609777] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.693 [2024-06-11 12:27:05.609782] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.693 [2024-06-11 12:27:05.609795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.693 qpair failed and we were unable to recover it. 
00:32:52.693 [2024-06-11 12:27:05.619783] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.693 [2024-06-11 12:27:05.619859] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.693 [2024-06-11 12:27:05.619871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.693 [2024-06-11 12:27:05.619876] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.693 [2024-06-11 12:27:05.619880] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.693 [2024-06-11 12:27:05.619892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.693 qpair failed and we were unable to recover it. 00:32:52.693 [2024-06-11 12:27:05.629770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.693 [2024-06-11 12:27:05.629845] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.693 [2024-06-11 12:27:05.629864] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.693 [2024-06-11 12:27:05.629870] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.693 [2024-06-11 12:27:05.629875] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.693 [2024-06-11 12:27:05.629889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.693 qpair failed and we were unable to recover it. 00:32:52.693 [2024-06-11 12:27:05.639709] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.693 [2024-06-11 12:27:05.639757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.693 [2024-06-11 12:27:05.639769] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.694 [2024-06-11 12:27:05.639774] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.694 [2024-06-11 12:27:05.639778] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.694 [2024-06-11 12:27:05.639789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.694 qpair failed and we were unable to recover it. 
00:32:52.694 [2024-06-11 12:27:05.649836] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.694 [2024-06-11 12:27:05.649879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.694 [2024-06-11 12:27:05.649891] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.694 [2024-06-11 12:27:05.649896] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.694 [2024-06-11 12:27:05.649900] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.694 [2024-06-11 12:27:05.649911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.694 qpair failed and we were unable to recover it. 00:32:52.694 [2024-06-11 12:27:05.659892] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.694 [2024-06-11 12:27:05.659964] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.694 [2024-06-11 12:27:05.659975] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.694 [2024-06-11 12:27:05.659980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.694 [2024-06-11 12:27:05.659984] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.694 [2024-06-11 12:27:05.659993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.694 qpair failed and we were unable to recover it. 00:32:52.694 [2024-06-11 12:27:05.669924] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.694 [2024-06-11 12:27:05.669967] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.694 [2024-06-11 12:27:05.669978] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.694 [2024-06-11 12:27:05.669983] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.694 [2024-06-11 12:27:05.669987] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.694 [2024-06-11 12:27:05.669997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.694 qpair failed and we were unable to recover it. 
00:32:52.694 [2024-06-11 12:27:05.679814] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.694 [2024-06-11 12:27:05.679865] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.694 [2024-06-11 12:27:05.679879] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.694 [2024-06-11 12:27:05.679884] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.694 [2024-06-11 12:27:05.679889] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.694 [2024-06-11 12:27:05.679898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.694 qpair failed and we were unable to recover it. 00:32:52.694 [2024-06-11 12:27:05.689945] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.694 [2024-06-11 12:27:05.689991] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.694 [2024-06-11 12:27:05.690001] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.694 [2024-06-11 12:27:05.690006] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.694 [2024-06-11 12:27:05.690011] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.694 [2024-06-11 12:27:05.690025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.694 qpair failed and we were unable to recover it. 00:32:52.694 [2024-06-11 12:27:05.699845] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.694 [2024-06-11 12:27:05.699886] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.694 [2024-06-11 12:27:05.699897] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.694 [2024-06-11 12:27:05.699902] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.694 [2024-06-11 12:27:05.699906] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.694 [2024-06-11 12:27:05.699916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.694 qpair failed and we were unable to recover it. 
00:32:52.694 [2024-06-11 12:27:05.709960] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.694 [2024-06-11 12:27:05.710002] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.694 [2024-06-11 12:27:05.710013] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.694 [2024-06-11 12:27:05.710024] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.694 [2024-06-11 12:27:05.710029] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.694 [2024-06-11 12:27:05.710039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.694 qpair failed and we were unable to recover it. 00:32:52.957 [2024-06-11 12:27:05.720056] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.957 [2024-06-11 12:27:05.720107] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.957 [2024-06-11 12:27:05.720117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.957 [2024-06-11 12:27:05.720122] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.957 [2024-06-11 12:27:05.720127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.957 [2024-06-11 12:27:05.720137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-06-11 12:27:05.730056] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.957 [2024-06-11 12:27:05.730105] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.957 [2024-06-11 12:27:05.730115] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.957 [2024-06-11 12:27:05.730120] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.957 [2024-06-11 12:27:05.730124] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.957 [2024-06-11 12:27:05.730135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.957 qpair failed and we were unable to recover it. 
00:32:52.957 [2024-06-11 12:27:05.740077] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.957 [2024-06-11 12:27:05.740120] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.957 [2024-06-11 12:27:05.740130] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.957 [2024-06-11 12:27:05.740135] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.957 [2024-06-11 12:27:05.740140] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.957 [2024-06-11 12:27:05.740149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-06-11 12:27:05.749996] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.957 [2024-06-11 12:27:05.750050] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.957 [2024-06-11 12:27:05.750060] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.957 [2024-06-11 12:27:05.750065] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.957 [2024-06-11 12:27:05.750070] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.957 [2024-06-11 12:27:05.750080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-06-11 12:27:05.760174] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.957 [2024-06-11 12:27:05.760221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.957 [2024-06-11 12:27:05.760231] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.957 [2024-06-11 12:27:05.760236] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.957 [2024-06-11 12:27:05.760241] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.957 [2024-06-11 12:27:05.760251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.957 qpair failed and we were unable to recover it. 
00:32:52.957 [2024-06-11 12:27:05.770170] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.957 [2024-06-11 12:27:05.770222] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.957 [2024-06-11 12:27:05.770238] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.957 [2024-06-11 12:27:05.770243] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.957 [2024-06-11 12:27:05.770247] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.957 [2024-06-11 12:27:05.770257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-06-11 12:27:05.780216] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.957 [2024-06-11 12:27:05.780254] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.957 [2024-06-11 12:27:05.780265] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.957 [2024-06-11 12:27:05.780270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.957 [2024-06-11 12:27:05.780274] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.957 [2024-06-11 12:27:05.780284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-06-11 12:27:05.790194] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.957 [2024-06-11 12:27:05.790236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.957 [2024-06-11 12:27:05.790246] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.957 [2024-06-11 12:27:05.790252] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.957 [2024-06-11 12:27:05.790256] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.957 [2024-06-11 12:27:05.790266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.957 qpair failed and we were unable to recover it. 
00:32:52.957 [2024-06-11 12:27:05.800333] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.957 [2024-06-11 12:27:05.800382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.957 [2024-06-11 12:27:05.800393] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.957 [2024-06-11 12:27:05.800397] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.957 [2024-06-11 12:27:05.800402] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.957 [2024-06-11 12:27:05.800411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-06-11 12:27:05.810286] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.957 [2024-06-11 12:27:05.810334] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.957 [2024-06-11 12:27:05.810345] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.957 [2024-06-11 12:27:05.810350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.957 [2024-06-11 12:27:05.810354] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.957 [2024-06-11 12:27:05.810367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.957 qpair failed and we were unable to recover it. 00:32:52.957 [2024-06-11 12:27:05.820338] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.957 [2024-06-11 12:27:05.820382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.957 [2024-06-11 12:27:05.820392] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.957 [2024-06-11 12:27:05.820397] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.958 [2024-06-11 12:27:05.820402] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.958 [2024-06-11 12:27:05.820411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.958 qpair failed and we were unable to recover it. 
00:32:52.958 [2024-06-11 12:27:05.830204] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.958 [2024-06-11 12:27:05.830244] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.958 [2024-06-11 12:27:05.830255] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.958 [2024-06-11 12:27:05.830259] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.958 [2024-06-11 12:27:05.830264] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.958 [2024-06-11 12:27:05.830273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-06-11 12:27:05.840423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.958 [2024-06-11 12:27:05.840470] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.958 [2024-06-11 12:27:05.840480] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.958 [2024-06-11 12:27:05.840485] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.958 [2024-06-11 12:27:05.840489] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.958 [2024-06-11 12:27:05.840499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-06-11 12:27:05.850375] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.958 [2024-06-11 12:27:05.850462] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.958 [2024-06-11 12:27:05.850472] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.958 [2024-06-11 12:27:05.850477] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.958 [2024-06-11 12:27:05.850482] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.958 [2024-06-11 12:27:05.850493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.958 qpair failed and we were unable to recover it. 
00:32:52.958 [2024-06-11 12:27:05.860436] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.958 [2024-06-11 12:27:05.860481] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.958 [2024-06-11 12:27:05.860494] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.958 [2024-06-11 12:27:05.860499] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.958 [2024-06-11 12:27:05.860503] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.958 [2024-06-11 12:27:05.860514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-06-11 12:27:05.870444] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.958 [2024-06-11 12:27:05.870494] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.958 [2024-06-11 12:27:05.870505] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.958 [2024-06-11 12:27:05.870510] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.958 [2024-06-11 12:27:05.870515] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.958 [2024-06-11 12:27:05.870525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-06-11 12:27:05.880485] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.958 [2024-06-11 12:27:05.880547] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.958 [2024-06-11 12:27:05.880558] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.958 [2024-06-11 12:27:05.880563] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.958 [2024-06-11 12:27:05.880567] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.958 [2024-06-11 12:27:05.880577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.958 qpair failed and we were unable to recover it. 
00:32:52.958 [2024-06-11 12:27:05.890509] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.958 [2024-06-11 12:27:05.890557] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.958 [2024-06-11 12:27:05.890567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.958 [2024-06-11 12:27:05.890572] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.958 [2024-06-11 12:27:05.890577] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.958 [2024-06-11 12:27:05.890587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-06-11 12:27:05.900528] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.958 [2024-06-11 12:27:05.900567] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.958 [2024-06-11 12:27:05.900578] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.958 [2024-06-11 12:27:05.900583] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.958 [2024-06-11 12:27:05.900590] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.958 [2024-06-11 12:27:05.900600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-06-11 12:27:05.910550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.958 [2024-06-11 12:27:05.910589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.958 [2024-06-11 12:27:05.910600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.958 [2024-06-11 12:27:05.910605] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.958 [2024-06-11 12:27:05.910610] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.958 [2024-06-11 12:27:05.910619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.958 qpair failed and we were unable to recover it. 
00:32:52.958 [2024-06-11 12:27:05.920632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.958 [2024-06-11 12:27:05.920678] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.958 [2024-06-11 12:27:05.920689] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.958 [2024-06-11 12:27:05.920694] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.958 [2024-06-11 12:27:05.920698] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.958 [2024-06-11 12:27:05.920708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-06-11 12:27:05.930620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.958 [2024-06-11 12:27:05.930669] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.958 [2024-06-11 12:27:05.930680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.958 [2024-06-11 12:27:05.930685] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.958 [2024-06-11 12:27:05.930689] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.958 [2024-06-11 12:27:05.930699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.958 [2024-06-11 12:27:05.940638] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.958 [2024-06-11 12:27:05.940676] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.958 [2024-06-11 12:27:05.940686] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.958 [2024-06-11 12:27:05.940692] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.958 [2024-06-11 12:27:05.940696] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.958 [2024-06-11 12:27:05.940706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.958 qpair failed and we were unable to recover it. 
00:32:52.958 [2024-06-11 12:27:05.950529] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.958 [2024-06-11 12:27:05.950577] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.958 [2024-06-11 12:27:05.950588] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.958 [2024-06-11 12:27:05.950593] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.958 [2024-06-11 12:27:05.950598] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.958 [2024-06-11 12:27:05.950608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.958 qpair failed and we were unable to recover it. 00:32:52.959 [2024-06-11 12:27:05.960599] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.959 [2024-06-11 12:27:05.960653] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.959 [2024-06-11 12:27:05.960663] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.959 [2024-06-11 12:27:05.960668] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.959 [2024-06-11 12:27:05.960673] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.959 [2024-06-11 12:27:05.960683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.959 qpair failed and we were unable to recover it. 00:32:52.959 [2024-06-11 12:27:05.970704] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.959 [2024-06-11 12:27:05.970751] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.959 [2024-06-11 12:27:05.970762] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.959 [2024-06-11 12:27:05.970767] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.959 [2024-06-11 12:27:05.970771] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.959 [2024-06-11 12:27:05.970781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.959 qpair failed and we were unable to recover it. 
00:32:52.959 [2024-06-11 12:27:05.980737] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.959 [2024-06-11 12:27:05.980823] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.959 [2024-06-11 12:27:05.980842] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.959 [2024-06-11 12:27:05.980848] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.959 [2024-06-11 12:27:05.980853] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:52.959 [2024-06-11 12:27:05.980866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:52.959 qpair failed and we were unable to recover it. 00:32:53.221 [2024-06-11 12:27:05.990772] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.221 [2024-06-11 12:27:05.990831] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.221 [2024-06-11 12:27:05.990850] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.221 [2024-06-11 12:27:05.990856] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.221 [2024-06-11 12:27:05.990865] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.221 [2024-06-11 12:27:05.990878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.221 qpair failed and we were unable to recover it. 00:32:53.221 [2024-06-11 12:27:06.000853] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.221 [2024-06-11 12:27:06.000928] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.221 [2024-06-11 12:27:06.000946] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.221 [2024-06-11 12:27:06.000951] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.221 [2024-06-11 12:27:06.000956] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.221 [2024-06-11 12:27:06.000970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.221 qpair failed and we were unable to recover it. 
00:32:53.221 [2024-06-11 12:27:06.010828] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.221 [2024-06-11 12:27:06.010880] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.221 [2024-06-11 12:27:06.010892] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.221 [2024-06-11 12:27:06.010897] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.221 [2024-06-11 12:27:06.010902] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.221 [2024-06-11 12:27:06.010913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.221 qpair failed and we were unable to recover it. 00:32:53.221 [2024-06-11 12:27:06.020814] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.221 [2024-06-11 12:27:06.020854] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.221 [2024-06-11 12:27:06.020865] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.221 [2024-06-11 12:27:06.020870] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.221 [2024-06-11 12:27:06.020874] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.221 [2024-06-11 12:27:06.020885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.221 qpair failed and we were unable to recover it. 00:32:53.221 [2024-06-11 12:27:06.030874] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.221 [2024-06-11 12:27:06.030917] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.221 [2024-06-11 12:27:06.030927] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.221 [2024-06-11 12:27:06.030933] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.221 [2024-06-11 12:27:06.030937] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.221 [2024-06-11 12:27:06.030947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.221 qpair failed and we were unable to recover it. 
00:32:53.221 [2024-06-11 12:27:06.040813] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.221 [2024-06-11 12:27:06.040860] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.221 [2024-06-11 12:27:06.040871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.221 [2024-06-11 12:27:06.040876] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.221 [2024-06-11 12:27:06.040880] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.221 [2024-06-11 12:27:06.040891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.221 qpair failed and we were unable to recover it. 00:32:53.221 [2024-06-11 12:27:06.050929] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.221 [2024-06-11 12:27:06.050976] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.221 [2024-06-11 12:27:06.050987] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.221 [2024-06-11 12:27:06.050992] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.221 [2024-06-11 12:27:06.050997] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.221 [2024-06-11 12:27:06.051007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.221 qpair failed and we were unable to recover it. 00:32:53.221 [2024-06-11 12:27:06.060962] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.221 [2024-06-11 12:27:06.061007] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.221 [2024-06-11 12:27:06.061021] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.221 [2024-06-11 12:27:06.061027] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.221 [2024-06-11 12:27:06.061031] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.221 [2024-06-11 12:27:06.061041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.221 qpair failed and we were unable to recover it. 
00:32:53.221 [2024-06-11 12:27:06.070948] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.221 [2024-06-11 12:27:06.070991] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.221 [2024-06-11 12:27:06.071001] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.221 [2024-06-11 12:27:06.071006] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.221 [2024-06-11 12:27:06.071011] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.221 [2024-06-11 12:27:06.071024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.221 qpair failed and we were unable to recover it. 00:32:53.221 [2024-06-11 12:27:06.081055] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.221 [2024-06-11 12:27:06.081124] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.221 [2024-06-11 12:27:06.081135] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.221 [2024-06-11 12:27:06.081142] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.221 [2024-06-11 12:27:06.081147] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.221 [2024-06-11 12:27:06.081157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.221 qpair failed and we were unable to recover it. 00:32:53.221 [2024-06-11 12:27:06.091052] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.221 [2024-06-11 12:27:06.091117] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.221 [2024-06-11 12:27:06.091128] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.221 [2024-06-11 12:27:06.091133] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.221 [2024-06-11 12:27:06.091138] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.221 [2024-06-11 12:27:06.091148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.221 qpair failed and we were unable to recover it. 
00:32:53.221 [2024-06-11 12:27:06.100952] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.221 [2024-06-11 12:27:06.101000] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.221 [2024-06-11 12:27:06.101011] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.221 [2024-06-11 12:27:06.101016] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.221 [2024-06-11 12:27:06.101024] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.221 [2024-06-11 12:27:06.101034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.221 qpair failed and we were unable to recover it. 00:32:53.221 [2024-06-11 12:27:06.111111] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.221 [2024-06-11 12:27:06.111150] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.221 [2024-06-11 12:27:06.111161] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.221 [2024-06-11 12:27:06.111166] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.222 [2024-06-11 12:27:06.111171] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.222 [2024-06-11 12:27:06.111181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.222 qpair failed and we were unable to recover it. 00:32:53.222 [2024-06-11 12:27:06.121172] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.222 [2024-06-11 12:27:06.121221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.222 [2024-06-11 12:27:06.121231] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.222 [2024-06-11 12:27:06.121237] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.222 [2024-06-11 12:27:06.121241] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.222 [2024-06-11 12:27:06.121251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.222 qpair failed and we were unable to recover it. 
00:32:53.222 [2024-06-11 12:27:06.131158] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.222 [2024-06-11 12:27:06.131205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.222 [2024-06-11 12:27:06.131216] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.222 [2024-06-11 12:27:06.131221] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.222 [2024-06-11 12:27:06.131225] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.222 [2024-06-11 12:27:06.131235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.222 qpair failed and we were unable to recover it. 00:32:53.222 [2024-06-11 12:27:06.141168] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.222 [2024-06-11 12:27:06.141208] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.222 [2024-06-11 12:27:06.141219] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.222 [2024-06-11 12:27:06.141224] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.222 [2024-06-11 12:27:06.141228] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.222 [2024-06-11 12:27:06.141238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.222 qpair failed and we were unable to recover it. 00:32:53.222 [2024-06-11 12:27:06.151209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.222 [2024-06-11 12:27:06.151282] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.222 [2024-06-11 12:27:06.151292] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.222 [2024-06-11 12:27:06.151297] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.222 [2024-06-11 12:27:06.151302] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.222 [2024-06-11 12:27:06.151312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.222 qpair failed and we were unable to recover it. 
00:32:53.222 [2024-06-11 12:27:06.161283] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.222 [2024-06-11 12:27:06.161331] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.222 [2024-06-11 12:27:06.161341] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.222 [2024-06-11 12:27:06.161346] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.222 [2024-06-11 12:27:06.161351] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.222 [2024-06-11 12:27:06.161360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.222 qpair failed and we were unable to recover it. 00:32:53.222 [2024-06-11 12:27:06.171253] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.222 [2024-06-11 12:27:06.171300] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.222 [2024-06-11 12:27:06.171310] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.222 [2024-06-11 12:27:06.171318] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.222 [2024-06-11 12:27:06.171322] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.222 [2024-06-11 12:27:06.171332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.222 qpair failed and we were unable to recover it. 00:32:53.222 [2024-06-11 12:27:06.181307] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.222 [2024-06-11 12:27:06.181353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.222 [2024-06-11 12:27:06.181363] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.222 [2024-06-11 12:27:06.181368] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.222 [2024-06-11 12:27:06.181372] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.222 [2024-06-11 12:27:06.181382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.222 qpair failed and we were unable to recover it. 
00:32:53.222 [2024-06-11 12:27:06.191301] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.222 [2024-06-11 12:27:06.191361] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.222 [2024-06-11 12:27:06.191372] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.222 [2024-06-11 12:27:06.191376] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.222 [2024-06-11 12:27:06.191381] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.222 [2024-06-11 12:27:06.191391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.222 qpair failed and we were unable to recover it. 00:32:53.222 [2024-06-11 12:27:06.201273] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.222 [2024-06-11 12:27:06.201327] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.222 [2024-06-11 12:27:06.201337] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.222 [2024-06-11 12:27:06.201342] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.222 [2024-06-11 12:27:06.201347] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.222 [2024-06-11 12:27:06.201357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.222 qpair failed and we were unable to recover it. 00:32:53.222 [2024-06-11 12:27:06.211229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.222 [2024-06-11 12:27:06.211275] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.222 [2024-06-11 12:27:06.211285] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.222 [2024-06-11 12:27:06.211290] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.222 [2024-06-11 12:27:06.211295] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.222 [2024-06-11 12:27:06.211305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.222 qpair failed and we were unable to recover it. 
00:32:53.222 [2024-06-11 12:27:06.221423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.222 [2024-06-11 12:27:06.221506] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.222 [2024-06-11 12:27:06.221517] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.222 [2024-06-11 12:27:06.221522] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.222 [2024-06-11 12:27:06.221527] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.222 [2024-06-11 12:27:06.221537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.222 qpair failed and we were unable to recover it. 00:32:53.222 [2024-06-11 12:27:06.231430] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.222 [2024-06-11 12:27:06.231470] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.222 [2024-06-11 12:27:06.231481] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.222 [2024-06-11 12:27:06.231486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.222 [2024-06-11 12:27:06.231490] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.222 [2024-06-11 12:27:06.231500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.222 qpair failed and we were unable to recover it. 00:32:53.222 [2024-06-11 12:27:06.241539] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.222 [2024-06-11 12:27:06.241594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.222 [2024-06-11 12:27:06.241604] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.222 [2024-06-11 12:27:06.241609] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.222 [2024-06-11 12:27:06.241613] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.223 [2024-06-11 12:27:06.241623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.223 qpair failed and we were unable to recover it. 
00:32:53.223 [2024-06-11 12:27:06.251483] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.223 [2024-06-11 12:27:06.251531] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.223 [2024-06-11 12:27:06.251542] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.223 [2024-06-11 12:27:06.251546] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.223 [2024-06-11 12:27:06.251551] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.223 [2024-06-11 12:27:06.251561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.223 qpair failed and we were unable to recover it. 00:32:53.485 [2024-06-11 12:27:06.261372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.485 [2024-06-11 12:27:06.261415] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.485 [2024-06-11 12:27:06.261430] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.485 [2024-06-11 12:27:06.261436] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.485 [2024-06-11 12:27:06.261440] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.485 [2024-06-11 12:27:06.261451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.485 qpair failed and we were unable to recover it. 00:32:53.485 [2024-06-11 12:27:06.271407] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.485 [2024-06-11 12:27:06.271454] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.485 [2024-06-11 12:27:06.271465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.485 [2024-06-11 12:27:06.271470] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.485 [2024-06-11 12:27:06.271475] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.485 [2024-06-11 12:27:06.271485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.485 qpair failed and we were unable to recover it. 
00:32:53.485 [2024-06-11 12:27:06.281596] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.485 [2024-06-11 12:27:06.281646] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.485 [2024-06-11 12:27:06.281657] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.485 [2024-06-11 12:27:06.281662] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.485 [2024-06-11 12:27:06.281666] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.485 [2024-06-11 12:27:06.281676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.485 qpair failed and we were unable to recover it. 00:32:53.485 [2024-06-11 12:27:06.291580] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.485 [2024-06-11 12:27:06.291628] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.485 [2024-06-11 12:27:06.291639] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.485 [2024-06-11 12:27:06.291643] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.485 [2024-06-11 12:27:06.291648] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.485 [2024-06-11 12:27:06.291657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.485 qpair failed and we were unable to recover it. 00:32:53.485 [2024-06-11 12:27:06.301645] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.485 [2024-06-11 12:27:06.301688] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.485 [2024-06-11 12:27:06.301699] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.485 [2024-06-11 12:27:06.301704] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.485 [2024-06-11 12:27:06.301709] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.485 [2024-06-11 12:27:06.301721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.485 qpair failed and we were unable to recover it. 
00:32:53.485 [2024-06-11 12:27:06.311669] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.485 [2024-06-11 12:27:06.311715] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.485 [2024-06-11 12:27:06.311726] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.485 [2024-06-11 12:27:06.311731] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.485 [2024-06-11 12:27:06.311735] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.485 [2024-06-11 12:27:06.311745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.485 qpair failed and we were unable to recover it. 00:32:53.485 [2024-06-11 12:27:06.321772] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.485 [2024-06-11 12:27:06.321819] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.485 [2024-06-11 12:27:06.321830] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.485 [2024-06-11 12:27:06.321835] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.485 [2024-06-11 12:27:06.321839] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.485 [2024-06-11 12:27:06.321849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.485 qpair failed and we were unable to recover it. 00:32:53.485 [2024-06-11 12:27:06.331740] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.485 [2024-06-11 12:27:06.331789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.485 [2024-06-11 12:27:06.331800] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.485 [2024-06-11 12:27:06.331805] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.485 [2024-06-11 12:27:06.331809] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.485 [2024-06-11 12:27:06.331819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.485 qpair failed and we were unable to recover it. 
00:32:53.485 [2024-06-11 12:27:06.341727] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.485 [2024-06-11 12:27:06.341764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.485 [2024-06-11 12:27:06.341775] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.485 [2024-06-11 12:27:06.341780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.485 [2024-06-11 12:27:06.341784] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.485 [2024-06-11 12:27:06.341794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.485 qpair failed and we were unable to recover it. 00:32:53.485 [2024-06-11 12:27:06.351712] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.485 [2024-06-11 12:27:06.351784] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.485 [2024-06-11 12:27:06.351797] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.485 [2024-06-11 12:27:06.351802] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.485 [2024-06-11 12:27:06.351807] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.485 [2024-06-11 12:27:06.351817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.485 qpair failed and we were unable to recover it. 00:32:53.485 [2024-06-11 12:27:06.361724] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.485 [2024-06-11 12:27:06.361770] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.485 [2024-06-11 12:27:06.361781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.485 [2024-06-11 12:27:06.361786] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.485 [2024-06-11 12:27:06.361790] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.485 [2024-06-11 12:27:06.361800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.485 qpair failed and we were unable to recover it. 
00:32:53.485 [2024-06-11 12:27:06.371803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.485 [2024-06-11 12:27:06.371850] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.485 [2024-06-11 12:27:06.371860] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.485 [2024-06-11 12:27:06.371865] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.485 [2024-06-11 12:27:06.371869] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.486 [2024-06-11 12:27:06.371879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.486 qpair failed and we were unable to recover it. 00:32:53.486 [2024-06-11 12:27:06.381818] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.486 [2024-06-11 12:27:06.381863] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.486 [2024-06-11 12:27:06.381882] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.486 [2024-06-11 12:27:06.381887] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.486 [2024-06-11 12:27:06.381892] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.486 [2024-06-11 12:27:06.381905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.486 qpair failed and we were unable to recover it. 00:32:53.486 [2024-06-11 12:27:06.391855] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.486 [2024-06-11 12:27:06.391900] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.486 [2024-06-11 12:27:06.391912] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.486 [2024-06-11 12:27:06.391917] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.486 [2024-06-11 12:27:06.391922] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.486 [2024-06-11 12:27:06.391935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.486 qpair failed and we were unable to recover it. 
00:32:53.486 [2024-06-11 12:27:06.401903] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.486 [2024-06-11 12:27:06.401949] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.486 [2024-06-11 12:27:06.401961] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.486 [2024-06-11 12:27:06.401966] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.486 [2024-06-11 12:27:06.401971] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.486 [2024-06-11 12:27:06.401981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.486 qpair failed and we were unable to recover it. 00:32:53.486 [2024-06-11 12:27:06.411917] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.486 [2024-06-11 12:27:06.411962] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.486 [2024-06-11 12:27:06.411973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.486 [2024-06-11 12:27:06.411978] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.486 [2024-06-11 12:27:06.411983] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.486 [2024-06-11 12:27:06.411992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.486 qpair failed and we were unable to recover it. 00:32:53.486 [2024-06-11 12:27:06.421788] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.486 [2024-06-11 12:27:06.421831] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.486 [2024-06-11 12:27:06.421842] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.486 [2024-06-11 12:27:06.421847] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.486 [2024-06-11 12:27:06.421851] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.486 [2024-06-11 12:27:06.421861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.486 qpair failed and we were unable to recover it. 
00:32:53.486 [2024-06-11 12:27:06.431967] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.486 [2024-06-11 12:27:06.432013] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.486 [2024-06-11 12:27:06.432028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.486 [2024-06-11 12:27:06.432034] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.486 [2024-06-11 12:27:06.432038] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.486 [2024-06-11 12:27:06.432048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.486 qpair failed and we were unable to recover it. 00:32:53.486 [2024-06-11 12:27:06.442047] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.486 [2024-06-11 12:27:06.442105] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.486 [2024-06-11 12:27:06.442116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.486 [2024-06-11 12:27:06.442121] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.486 [2024-06-11 12:27:06.442125] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.486 [2024-06-11 12:27:06.442135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.486 qpair failed and we were unable to recover it. 00:32:53.486 [2024-06-11 12:27:06.452047] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.486 [2024-06-11 12:27:06.452092] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.486 [2024-06-11 12:27:06.452103] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.486 [2024-06-11 12:27:06.452107] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.486 [2024-06-11 12:27:06.452112] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.486 [2024-06-11 12:27:06.452122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.486 qpair failed and we were unable to recover it. 
00:32:53.486 [2024-06-11 12:27:06.462055] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.486 [2024-06-11 12:27:06.462100] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.486 [2024-06-11 12:27:06.462111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.486 [2024-06-11 12:27:06.462115] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.486 [2024-06-11 12:27:06.462120] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.486 [2024-06-11 12:27:06.462129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.486 qpair failed and we were unable to recover it. 00:32:53.486 [2024-06-11 12:27:06.471941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.486 [2024-06-11 12:27:06.471980] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.486 [2024-06-11 12:27:06.471991] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.486 [2024-06-11 12:27:06.471997] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.486 [2024-06-11 12:27:06.472001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.486 [2024-06-11 12:27:06.472011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.486 qpair failed and we were unable to recover it. 00:32:53.486 [2024-06-11 12:27:06.482003] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.486 [2024-06-11 12:27:06.482052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.486 [2024-06-11 12:27:06.482063] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.486 [2024-06-11 12:27:06.482068] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.486 [2024-06-11 12:27:06.482076] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.486 [2024-06-11 12:27:06.482086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.486 qpair failed and we were unable to recover it. 
00:32:53.486 [2024-06-11 12:27:06.492108] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.486 [2024-06-11 12:27:06.492204] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.486 [2024-06-11 12:27:06.492215] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.486 [2024-06-11 12:27:06.492220] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.486 [2024-06-11 12:27:06.492224] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.486 [2024-06-11 12:27:06.492234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.486 qpair failed and we were unable to recover it. 00:32:53.486 [2024-06-11 12:27:06.502112] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.486 [2024-06-11 12:27:06.502154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.486 [2024-06-11 12:27:06.502165] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.486 [2024-06-11 12:27:06.502170] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.486 [2024-06-11 12:27:06.502174] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.486 [2024-06-11 12:27:06.502184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.486 qpair failed and we were unable to recover it. 00:32:53.487 [2024-06-11 12:27:06.512182] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.487 [2024-06-11 12:27:06.512227] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.487 [2024-06-11 12:27:06.512238] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.487 [2024-06-11 12:27:06.512243] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.487 [2024-06-11 12:27:06.512247] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.487 [2024-06-11 12:27:06.512257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.487 qpair failed and we were unable to recover it. 
00:32:53.749 [2024-06-11 12:27:06.522238] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.749 [2024-06-11 12:27:06.522286] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.749 [2024-06-11 12:27:06.522297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.749 [2024-06-11 12:27:06.522302] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.749 [2024-06-11 12:27:06.522307] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.749 [2024-06-11 12:27:06.522316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.749 qpair failed and we were unable to recover it. 00:32:53.749 [2024-06-11 12:27:06.532214] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.749 [2024-06-11 12:27:06.532273] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.749 [2024-06-11 12:27:06.532284] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.749 [2024-06-11 12:27:06.532289] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.749 [2024-06-11 12:27:06.532293] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.749 [2024-06-11 12:27:06.532303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.749 qpair failed and we were unable to recover it. 00:32:53.749 [2024-06-11 12:27:06.542268] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.749 [2024-06-11 12:27:06.542312] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.749 [2024-06-11 12:27:06.542323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.749 [2024-06-11 12:27:06.542328] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.749 [2024-06-11 12:27:06.542332] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.749 [2024-06-11 12:27:06.542342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.749 qpair failed and we were unable to recover it. 
00:32:53.749 [2024-06-11 12:27:06.552186] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.749 [2024-06-11 12:27:06.552226] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.749 [2024-06-11 12:27:06.552237] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.749 [2024-06-11 12:27:06.552242] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.749 [2024-06-11 12:27:06.552246] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.749 [2024-06-11 12:27:06.552256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.749 qpair failed and we were unable to recover it. 00:32:53.749 [2024-06-11 12:27:06.562220] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.749 [2024-06-11 12:27:06.562266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.749 [2024-06-11 12:27:06.562276] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.749 [2024-06-11 12:27:06.562281] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.749 [2024-06-11 12:27:06.562286] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.749 [2024-06-11 12:27:06.562295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.749 qpair failed and we were unable to recover it. 00:32:53.749 [2024-06-11 12:27:06.572349] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.749 [2024-06-11 12:27:06.572396] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.749 [2024-06-11 12:27:06.572407] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.749 [2024-06-11 12:27:06.572418] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.749 [2024-06-11 12:27:06.572422] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.749 [2024-06-11 12:27:06.572432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.749 qpair failed and we were unable to recover it. 
00:32:53.749 [2024-06-11 12:27:06.582247] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.749 [2024-06-11 12:27:06.582299] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.749 [2024-06-11 12:27:06.582310] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.749 [2024-06-11 12:27:06.582315] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.749 [2024-06-11 12:27:06.582319] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.749 [2024-06-11 12:27:06.582329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.749 qpair failed and we were unable to recover it. 00:32:53.749 [2024-06-11 12:27:06.592264] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.749 [2024-06-11 12:27:06.592308] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.749 [2024-06-11 12:27:06.592319] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.749 [2024-06-11 12:27:06.592324] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.749 [2024-06-11 12:27:06.592328] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.749 [2024-06-11 12:27:06.592339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.749 qpair failed and we were unable to recover it. 00:32:53.749 [2024-06-11 12:27:06.602473] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.749 [2024-06-11 12:27:06.602526] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.749 [2024-06-11 12:27:06.602537] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.749 [2024-06-11 12:27:06.602542] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.749 [2024-06-11 12:27:06.602547] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.749 [2024-06-11 12:27:06.602556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.749 qpair failed and we were unable to recover it. 
00:32:53.749 [2024-06-11 12:27:06.612450] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.749 [2024-06-11 12:27:06.612493] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.749 [2024-06-11 12:27:06.612504] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.749 [2024-06-11 12:27:06.612509] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.749 [2024-06-11 12:27:06.612513] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.749 [2024-06-11 12:27:06.612523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.749 qpair failed and we were unable to recover it. 00:32:53.749 [2024-06-11 12:27:06.622479] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.749 [2024-06-11 12:27:06.622524] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.749 [2024-06-11 12:27:06.622535] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.749 [2024-06-11 12:27:06.622540] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.749 [2024-06-11 12:27:06.622544] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.749 [2024-06-11 12:27:06.622554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.749 qpair failed and we were unable to recover it. 00:32:53.749 [2024-06-11 12:27:06.632509] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.749 [2024-06-11 12:27:06.632556] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.749 [2024-06-11 12:27:06.632567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.749 [2024-06-11 12:27:06.632571] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.749 [2024-06-11 12:27:06.632576] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.749 [2024-06-11 12:27:06.632586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.749 qpair failed and we were unable to recover it. 
00:32:53.749 [2024-06-11 12:27:06.642571] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.749 [2024-06-11 12:27:06.642635] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.749 [2024-06-11 12:27:06.642645] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.749 [2024-06-11 12:27:06.642650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.749 [2024-06-11 12:27:06.642655] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.750 [2024-06-11 12:27:06.642665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.750 qpair failed and we were unable to recover it. 00:32:53.750 [2024-06-11 12:27:06.652559] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.750 [2024-06-11 12:27:06.652640] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.750 [2024-06-11 12:27:06.652651] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.750 [2024-06-11 12:27:06.652656] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.750 [2024-06-11 12:27:06.652660] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.750 [2024-06-11 12:27:06.652670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.750 qpair failed and we were unable to recover it. 00:32:53.750 [2024-06-11 12:27:06.662584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.750 [2024-06-11 12:27:06.662677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.750 [2024-06-11 12:27:06.662688] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.750 [2024-06-11 12:27:06.662695] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.750 [2024-06-11 12:27:06.662700] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.750 [2024-06-11 12:27:06.662709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.750 qpair failed and we were unable to recover it. 
00:32:53.750 [2024-06-11 12:27:06.672472] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.750 [2024-06-11 12:27:06.672516] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.750 [2024-06-11 12:27:06.672526] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.750 [2024-06-11 12:27:06.672532] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.750 [2024-06-11 12:27:06.672536] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.750 [2024-06-11 12:27:06.672546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.750 qpair failed and we were unable to recover it. 00:32:53.750 [2024-06-11 12:27:06.682656] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.750 [2024-06-11 12:27:06.682708] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.750 [2024-06-11 12:27:06.682719] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.750 [2024-06-11 12:27:06.682725] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.750 [2024-06-11 12:27:06.682729] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.750 [2024-06-11 12:27:06.682738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.750 qpair failed and we were unable to recover it. 00:32:53.750 [2024-06-11 12:27:06.692710] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.750 [2024-06-11 12:27:06.692785] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.750 [2024-06-11 12:27:06.692795] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.750 [2024-06-11 12:27:06.692800] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.750 [2024-06-11 12:27:06.692804] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.750 [2024-06-11 12:27:06.692814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.750 qpair failed and we were unable to recover it. 
00:32:53.750 [2024-06-11 12:27:06.702686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.750 [2024-06-11 12:27:06.702726] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.750 [2024-06-11 12:27:06.702737] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.750 [2024-06-11 12:27:06.702742] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.750 [2024-06-11 12:27:06.702746] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.750 [2024-06-11 12:27:06.702756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.750 qpair failed and we were unable to recover it. 00:32:53.750 [2024-06-11 12:27:06.712715] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.750 [2024-06-11 12:27:06.712758] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.750 [2024-06-11 12:27:06.712768] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.750 [2024-06-11 12:27:06.712773] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.750 [2024-06-11 12:27:06.712777] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.750 [2024-06-11 12:27:06.712787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.750 qpair failed and we were unable to recover it. 00:32:53.750 [2024-06-11 12:27:06.722820] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.750 [2024-06-11 12:27:06.722867] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.750 [2024-06-11 12:27:06.722878] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.750 [2024-06-11 12:27:06.722883] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.750 [2024-06-11 12:27:06.722887] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.750 [2024-06-11 12:27:06.722896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.750 qpair failed and we were unable to recover it. 
00:32:53.750 [2024-06-11 12:27:06.732775] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.750 [2024-06-11 12:27:06.732827] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.750 [2024-06-11 12:27:06.732837] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.750 [2024-06-11 12:27:06.732842] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.750 [2024-06-11 12:27:06.732846] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.750 [2024-06-11 12:27:06.732856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.750 qpair failed and we were unable to recover it. 00:32:53.750 [2024-06-11 12:27:06.742794] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.750 [2024-06-11 12:27:06.742844] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.750 [2024-06-11 12:27:06.742854] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.750 [2024-06-11 12:27:06.742859] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.750 [2024-06-11 12:27:06.742863] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.750 [2024-06-11 12:27:06.742873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.750 qpair failed and we were unable to recover it. 00:32:53.750 [2024-06-11 12:27:06.752839] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.750 [2024-06-11 12:27:06.752912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.750 [2024-06-11 12:27:06.752925] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.750 [2024-06-11 12:27:06.752930] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.750 [2024-06-11 12:27:06.752935] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.750 [2024-06-11 12:27:06.752945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.750 qpair failed and we were unable to recover it. 
00:32:53.750 [2024-06-11 12:27:06.762839] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.750 [2024-06-11 12:27:06.762884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.750 [2024-06-11 12:27:06.762895] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.750 [2024-06-11 12:27:06.762900] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.750 [2024-06-11 12:27:06.762904] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.750 [2024-06-11 12:27:06.762914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.750 qpair failed and we were unable to recover it. 00:32:53.750 [2024-06-11 12:27:06.772882] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.750 [2024-06-11 12:27:06.772925] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.750 [2024-06-11 12:27:06.772935] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.750 [2024-06-11 12:27:06.772940] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.750 [2024-06-11 12:27:06.772944] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:53.750 [2024-06-11 12:27:06.772954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:53.750 qpair failed and we were unable to recover it. 00:32:54.012 [2024-06-11 12:27:06.782791] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.012 [2024-06-11 12:27:06.782840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.012 [2024-06-11 12:27:06.782851] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.012 [2024-06-11 12:27:06.782856] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.012 [2024-06-11 12:27:06.782860] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.012 [2024-06-11 12:27:06.782870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.012 qpair failed and we were unable to recover it. 
00:32:54.012 [2024-06-11 12:27:06.792963] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.012 [2024-06-11 12:27:06.793001] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.012 [2024-06-11 12:27:06.793012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.012 [2024-06-11 12:27:06.793020] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.012 [2024-06-11 12:27:06.793025] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.012 [2024-06-11 12:27:06.793038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.012 qpair failed and we were unable to recover it. 00:32:54.012 [2024-06-11 12:27:06.802875] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.012 [2024-06-11 12:27:06.802920] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.012 [2024-06-11 12:27:06.802931] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.012 [2024-06-11 12:27:06.802936] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.012 [2024-06-11 12:27:06.802940] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.012 [2024-06-11 12:27:06.802950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.012 qpair failed and we were unable to recover it. 00:32:54.012 [2024-06-11 12:27:06.812991] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.012 [2024-06-11 12:27:06.813040] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.012 [2024-06-11 12:27:06.813051] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.012 [2024-06-11 12:27:06.813056] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.012 [2024-06-11 12:27:06.813060] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.012 [2024-06-11 12:27:06.813070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.012 qpair failed and we were unable to recover it. 
00:32:54.013 [2024-06-11 12:27:06.823067] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.013 [2024-06-11 12:27:06.823102] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.013 [2024-06-11 12:27:06.823113] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.013 [2024-06-11 12:27:06.823118] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.013 [2024-06-11 12:27:06.823122] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.013 [2024-06-11 12:27:06.823133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.013 qpair failed and we were unable to recover it. 00:32:54.013 [2024-06-11 12:27:06.832917] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.013 [2024-06-11 12:27:06.832955] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.013 [2024-06-11 12:27:06.832966] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.013 [2024-06-11 12:27:06.832971] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.013 [2024-06-11 12:27:06.832975] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.013 [2024-06-11 12:27:06.832985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.013 qpair failed and we were unable to recover it. 00:32:54.013 [2024-06-11 12:27:06.843088] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.013 [2024-06-11 12:27:06.843124] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.013 [2024-06-11 12:27:06.843138] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.013 [2024-06-11 12:27:06.843143] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.013 [2024-06-11 12:27:06.843147] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.013 [2024-06-11 12:27:06.843157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.013 qpair failed and we were unable to recover it. 
00:32:54.013 [2024-06-11 12:27:06.853069] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.013 [2024-06-11 12:27:06.853115] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.013 [2024-06-11 12:27:06.853126] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.013 [2024-06-11 12:27:06.853131] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.013 [2024-06-11 12:27:06.853135] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.013 [2024-06-11 12:27:06.853145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.013 qpair failed and we were unable to recover it. 00:32:54.013 [2024-06-11 12:27:06.862993] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.013 [2024-06-11 12:27:06.863031] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.013 [2024-06-11 12:27:06.863042] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.013 [2024-06-11 12:27:06.863047] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.013 [2024-06-11 12:27:06.863051] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.013 [2024-06-11 12:27:06.863061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.013 qpair failed and we were unable to recover it. 00:32:54.013 [2024-06-11 12:27:06.873043] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.013 [2024-06-11 12:27:06.873100] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.013 [2024-06-11 12:27:06.873111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.013 [2024-06-11 12:27:06.873116] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.013 [2024-06-11 12:27:06.873121] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.013 [2024-06-11 12:27:06.873130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.013 qpair failed and we were unable to recover it. 
00:32:54.013 [2024-06-11 12:27:06.883186] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.013 [2024-06-11 12:27:06.883228] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.013 [2024-06-11 12:27:06.883239] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.013 [2024-06-11 12:27:06.883244] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.013 [2024-06-11 12:27:06.883248] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.013 [2024-06-11 12:27:06.883261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.013 qpair failed and we were unable to recover it. 00:32:54.013 [2024-06-11 12:27:06.893218] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.013 [2024-06-11 12:27:06.893262] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.013 [2024-06-11 12:27:06.893273] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.013 [2024-06-11 12:27:06.893278] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.013 [2024-06-11 12:27:06.893282] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.013 [2024-06-11 12:27:06.893292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.013 qpair failed and we were unable to recover it. 00:32:54.013 [2024-06-11 12:27:06.903324] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.013 [2024-06-11 12:27:06.903403] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.013 [2024-06-11 12:27:06.903414] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.013 [2024-06-11 12:27:06.903419] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.013 [2024-06-11 12:27:06.903423] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.013 [2024-06-11 12:27:06.903433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.013 qpair failed and we were unable to recover it. 
00:32:54.014 [2024-06-11 12:27:06.913252] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.014 [2024-06-11 12:27:06.913288] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.014 [2024-06-11 12:27:06.913299] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.014 [2024-06-11 12:27:06.913304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.014 [2024-06-11 12:27:06.913308] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.014 [2024-06-11 12:27:06.913318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.014 qpair failed and we were unable to recover it. 00:32:54.014 [2024-06-11 12:27:06.923337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.014 [2024-06-11 12:27:06.923378] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.014 [2024-06-11 12:27:06.923389] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.014 [2024-06-11 12:27:06.923393] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.014 [2024-06-11 12:27:06.923398] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.014 [2024-06-11 12:27:06.923407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.014 qpair failed and we were unable to recover it. 00:32:54.014 [2024-06-11 12:27:06.933295] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.014 [2024-06-11 12:27:06.933337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.014 [2024-06-11 12:27:06.933351] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.014 [2024-06-11 12:27:06.933356] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.014 [2024-06-11 12:27:06.933360] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.014 [2024-06-11 12:27:06.933370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.014 qpair failed and we were unable to recover it. 
00:32:54.014 [2024-06-11 12:27:06.943369] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.014 [2024-06-11 12:27:06.943403] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.014 [2024-06-11 12:27:06.943414] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.014 [2024-06-11 12:27:06.943419] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.014 [2024-06-11 12:27:06.943423] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.014 [2024-06-11 12:27:06.943433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.014 qpair failed and we were unable to recover it. 00:32:54.014 [2024-06-11 12:27:06.953254] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.014 [2024-06-11 12:27:06.953297] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.014 [2024-06-11 12:27:06.953308] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.014 [2024-06-11 12:27:06.953312] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.014 [2024-06-11 12:27:06.953317] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.014 [2024-06-11 12:27:06.953327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.014 qpair failed and we were unable to recover it. 00:32:54.014 [2024-06-11 12:27:06.963380] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.014 [2024-06-11 12:27:06.963421] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.014 [2024-06-11 12:27:06.963431] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.014 [2024-06-11 12:27:06.963436] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.014 [2024-06-11 12:27:06.963440] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.014 [2024-06-11 12:27:06.963450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.014 qpair failed and we were unable to recover it. 
00:32:54.014 [2024-06-11 12:27:06.973447] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.014 [2024-06-11 12:27:06.973524] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.014 [2024-06-11 12:27:06.973535] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.014 [2024-06-11 12:27:06.973540] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.014 [2024-06-11 12:27:06.973547] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.014 [2024-06-11 12:27:06.973557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.014 qpair failed and we were unable to recover it. 00:32:54.014 [2024-06-11 12:27:06.983469] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.014 [2024-06-11 12:27:06.983508] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.014 [2024-06-11 12:27:06.983519] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.014 [2024-06-11 12:27:06.983524] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.014 [2024-06-11 12:27:06.983529] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.014 [2024-06-11 12:27:06.983538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.014 qpair failed and we were unable to recover it. 00:32:54.014 [2024-06-11 12:27:06.993472] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.014 [2024-06-11 12:27:06.993513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.014 [2024-06-11 12:27:06.993523] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.014 [2024-06-11 12:27:06.993528] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.014 [2024-06-11 12:27:06.993533] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.014 [2024-06-11 12:27:06.993542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.014 qpair failed and we were unable to recover it. 
00:32:54.014 [2024-06-11 12:27:07.003420] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.014 [2024-06-11 12:27:07.003463] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.014 [2024-06-11 12:27:07.003473] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.014 [2024-06-11 12:27:07.003478] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.014 [2024-06-11 12:27:07.003483] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.014 [2024-06-11 12:27:07.003493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.014 qpair failed and we were unable to recover it. 00:32:54.014 [2024-06-11 12:27:07.013524] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.014 [2024-06-11 12:27:07.013596] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.014 [2024-06-11 12:27:07.013607] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.014 [2024-06-11 12:27:07.013612] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.014 [2024-06-11 12:27:07.013616] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.014 [2024-06-11 12:27:07.013626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.014 qpair failed and we were unable to recover it. 00:32:54.014 [2024-06-11 12:27:07.023563] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.014 [2024-06-11 12:27:07.023648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.014 [2024-06-11 12:27:07.023659] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.014 [2024-06-11 12:27:07.023664] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.014 [2024-06-11 12:27:07.023668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.014 [2024-06-11 12:27:07.023678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.014 qpair failed and we were unable to recover it. 
00:32:54.014 [2024-06-11 12:27:07.033589] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.014 [2024-06-11 12:27:07.033631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.014 [2024-06-11 12:27:07.033641] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.014 [2024-06-11 12:27:07.033646] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.014 [2024-06-11 12:27:07.033651] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.014 [2024-06-11 12:27:07.033660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.014 qpair failed and we were unable to recover it. 00:32:54.014 [2024-06-11 12:27:07.043508] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.014 [2024-06-11 12:27:07.043548] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.015 [2024-06-11 12:27:07.043559] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.015 [2024-06-11 12:27:07.043563] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.015 [2024-06-11 12:27:07.043568] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.015 [2024-06-11 12:27:07.043578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.015 qpair failed and we were unable to recover it. 00:32:54.277 [2024-06-11 12:27:07.053559] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.277 [2024-06-11 12:27:07.053604] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.277 [2024-06-11 12:27:07.053615] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.277 [2024-06-11 12:27:07.053620] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.277 [2024-06-11 12:27:07.053625] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.277 [2024-06-11 12:27:07.053635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.277 qpair failed and we were unable to recover it. 
00:32:54.277 [2024-06-11 12:27:07.063686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.277 [2024-06-11 12:27:07.063725] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.277 [2024-06-11 12:27:07.063736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.277 [2024-06-11 12:27:07.063741] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.277 [2024-06-11 12:27:07.063748] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.277 [2024-06-11 12:27:07.063758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.277 qpair failed and we were unable to recover it. 00:32:54.277 [2024-06-11 12:27:07.073692] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.277 [2024-06-11 12:27:07.073733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.277 [2024-06-11 12:27:07.073744] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.277 [2024-06-11 12:27:07.073749] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.277 [2024-06-11 12:27:07.073753] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.277 [2024-06-11 12:27:07.073763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.277 qpair failed and we were unable to recover it. 00:32:54.277 [2024-06-11 12:27:07.083743] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.277 [2024-06-11 12:27:07.083782] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.277 [2024-06-11 12:27:07.083793] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.277 [2024-06-11 12:27:07.083798] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.277 [2024-06-11 12:27:07.083803] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.277 [2024-06-11 12:27:07.083812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.277 qpair failed and we were unable to recover it. 
00:32:54.277 [2024-06-11 12:27:07.093770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.277 [2024-06-11 12:27:07.093813] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.277 [2024-06-11 12:27:07.093824] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.277 [2024-06-11 12:27:07.093829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.277 [2024-06-11 12:27:07.093833] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd358000b90 00:32:54.277 [2024-06-11 12:27:07.093843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:54.277 qpair failed and we were unable to recover it. 00:32:54.277 Read completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Read completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Read completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Read completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Read completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Read completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Read completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Read completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Read completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Read completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Write completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Read completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Write completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Read completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Write completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Write completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Read completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Write completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Read completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Read completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Read completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Read completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Read completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Write completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Read completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Read completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Write completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Read completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Read completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Write completed with error 
(sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Read completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 Read completed with error (sct=0, sc=8) 00:32:54.277 starting I/O failed 00:32:54.277 [2024-06-11 12:27:07.094729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:54.277 [2024-06-11 12:27:07.103795] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.277 [2024-06-11 12:27:07.103896] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.277 [2024-06-11 12:27:07.103946] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.277 [2024-06-11 12:27:07.103967] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.277 [2024-06-11 12:27:07.103986] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd360000b90 00:32:54.277 [2024-06-11 12:27:07.104041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:54.277 qpair failed and we were unable to recover it. 00:32:54.277 [2024-06-11 12:27:07.113769] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.277 [2024-06-11 12:27:07.113843] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.277 [2024-06-11 12:27:07.113874] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.277 [2024-06-11 12:27:07.113889] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.278 [2024-06-11 12:27:07.113903] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd360000b90 00:32:54.278 [2024-06-11 12:27:07.113943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:54.278 qpair failed and we were unable to recover it. 
00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Write completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Write completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Write completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Write completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Write completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Write completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Write completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Write completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Write completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Write completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Write completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 [2024-06-11 12:27:07.114897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:54.278 [2024-06-11 12:27:07.123866] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.278 [2024-06-11 12:27:07.123953] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.278 [2024-06-11 12:27:07.124001] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.278 [2024-06-11 12:27:07.124032] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric 
CONNECT command 00:32:54.278 [2024-06-11 12:27:07.124053] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd350000b90 00:32:54.278 [2024-06-11 12:27:07.124099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:54.278 qpair failed and we were unable to recover it. 00:32:54.278 [2024-06-11 12:27:07.133913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.278 [2024-06-11 12:27:07.133986] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.278 [2024-06-11 12:27:07.134014] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.278 [2024-06-11 12:27:07.134036] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.278 [2024-06-11 12:27:07.134049] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd350000b90 00:32:54.278 [2024-06-11 12:27:07.134077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:54.278 qpair failed and we were unable to recover it. 00:32:54.278 [2024-06-11 12:27:07.134471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e08a0 is same with the state(5) to be set 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Write completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Write completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Write completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 
00:32:54.278 Write completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Write completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Read completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Write completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 Write completed with error (sct=0, sc=8) 00:32:54.278 starting I/O failed 00:32:54.278 [2024-06-11 12:27:07.134837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:54.278 [2024-06-11 12:27:07.143906] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.278 [2024-06-11 12:27:07.144006] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.278 [2024-06-11 12:27:07.144036] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.278 [2024-06-11 12:27:07.144045] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.278 [2024-06-11 12:27:07.144051] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d2db0 00:32:54.278 [2024-06-11 12:27:07.144069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:54.278 qpair failed and we were unable to recover it. 00:32:54.278 [2024-06-11 12:27:07.153822] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.278 [2024-06-11 12:27:07.153911] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.278 [2024-06-11 12:27:07.153936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.278 [2024-06-11 12:27:07.153944] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.278 [2024-06-11 12:27:07.153950] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d2db0 00:32:54.278 [2024-06-11 12:27:07.153969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:54.278 qpair failed and we were unable to recover it. 
00:32:54.278 [2024-06-11 12:27:07.154375] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e08a0 (9): Bad file descriptor 00:32:54.278 Initializing NVMe Controllers 00:32:54.278 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:54.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:54.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:32:54.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:32:54.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:32:54.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:32:54.278 Initialization complete. Launching workers. 00:32:54.278 Starting thread on core 1 00:32:54.278 Starting thread on core 2 00:32:54.278 Starting thread on core 3 00:32:54.278 Starting thread on core 0 00:32:54.278 12:27:07 -- host/target_disconnect.sh@59 -- # sync 00:32:54.278 00:32:54.278 real 0m11.220s 00:32:54.278 user 0m21.655s 00:32:54.278 sys 0m3.375s 00:32:54.279 12:27:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:54.279 12:27:07 -- common/autotest_common.sh@10 -- # set +x 00:32:54.279 ************************************ 00:32:54.279 END TEST nvmf_target_disconnect_tc2 00:32:54.279 ************************************ 00:32:54.279 12:27:07 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:32:54.279 12:27:07 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:32:54.279 12:27:07 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:32:54.279 12:27:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:54.279 12:27:07 -- nvmf/common.sh@116 -- # sync 00:32:54.279 12:27:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:54.279 12:27:07 -- nvmf/common.sh@119 -- # set +e 00:32:54.279 12:27:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:54.279 12:27:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:54.279 rmmod nvme_tcp 00:32:54.279 rmmod nvme_fabrics 00:32:54.279 rmmod nvme_keyring 00:32:54.279 12:27:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:54.279 12:27:07 -- nvmf/common.sh@123 -- # set -e 00:32:54.279 12:27:07 -- nvmf/common.sh@124 -- # return 0 00:32:54.279 12:27:07 -- nvmf/common.sh@477 -- # '[' -n 1705925 ']' 00:32:54.279 12:27:07 -- nvmf/common.sh@478 -- # killprocess 1705925 00:32:54.279 12:27:07 -- common/autotest_common.sh@926 -- # '[' -z 1705925 ']' 00:32:54.279 12:27:07 -- common/autotest_common.sh@930 -- # kill -0 1705925 00:32:54.279 12:27:07 -- common/autotest_common.sh@931 -- # uname 00:32:54.279 12:27:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:54.279 12:27:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1705925 00:32:54.539 12:27:07 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:32:54.539 12:27:07 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:32:54.539 12:27:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1705925' 00:32:54.539 killing process with pid 1705925 00:32:54.539 12:27:07 -- common/autotest_common.sh@945 -- # kill 1705925 00:32:54.539 12:27:07 -- common/autotest_common.sh@950 -- # wait 1705925 00:32:54.539 12:27:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:54.539 12:27:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:54.539 12:27:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:54.539 12:27:07 -- 
nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:54.539 12:27:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:54.539 12:27:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:54.539 12:27:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:54.539 12:27:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:57.081 12:27:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:57.081 00:32:57.081 real 0m20.877s 00:32:57.081 user 0m49.067s 00:32:57.081 sys 0m8.841s 00:32:57.081 12:27:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:57.081 12:27:09 -- common/autotest_common.sh@10 -- # set +x 00:32:57.081 ************************************ 00:32:57.081 END TEST nvmf_target_disconnect 00:32:57.081 ************************************ 00:32:57.081 12:27:09 -- nvmf/nvmf.sh@126 -- # timing_exit host 00:32:57.081 12:27:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:57.081 12:27:09 -- common/autotest_common.sh@10 -- # set +x 00:32:57.081 12:27:09 -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:32:57.081 00:32:57.081 real 25m45.710s 00:32:57.081 user 68m28.523s 00:32:57.081 sys 7m4.096s 00:32:57.081 12:27:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:57.081 12:27:09 -- common/autotest_common.sh@10 -- # set +x 00:32:57.081 ************************************ 00:32:57.081 END TEST nvmf_tcp 00:32:57.081 ************************************ 00:32:57.081 12:27:09 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:32:57.081 12:27:09 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:57.081 12:27:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:57.081 12:27:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:57.081 12:27:09 -- common/autotest_common.sh@10 -- # set +x 00:32:57.081 ************************************ 00:32:57.081 START TEST spdkcli_nvmf_tcp 00:32:57.081 ************************************ 00:32:57.081 12:27:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:57.081 * Looking for test storage... 
00:32:57.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:57.082 12:27:09 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:57.082 12:27:09 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:57.082 12:27:09 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:57.082 12:27:09 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:57.082 12:27:09 -- nvmf/common.sh@7 -- # uname -s 00:32:57.082 12:27:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:57.082 12:27:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:57.082 12:27:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:57.082 12:27:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:57.082 12:27:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:57.082 12:27:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:57.082 12:27:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:57.082 12:27:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:57.082 12:27:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:57.082 12:27:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:57.082 12:27:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:57.082 12:27:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:57.082 12:27:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:57.082 12:27:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:57.082 12:27:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:57.082 12:27:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:57.082 12:27:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:57.082 12:27:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:57.082 12:27:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:57.082 12:27:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.082 12:27:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.082 12:27:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.082 12:27:09 -- paths/export.sh@5 -- # export PATH 00:32:57.082 12:27:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.082 12:27:09 -- nvmf/common.sh@46 -- # : 0 00:32:57.082 12:27:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:57.082 12:27:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:57.082 12:27:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:57.082 12:27:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:57.082 12:27:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:57.082 12:27:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:57.082 12:27:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:57.082 12:27:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:57.082 12:27:09 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:57.082 12:27:09 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:57.082 12:27:09 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:57.082 12:27:09 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:57.082 12:27:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:57.082 12:27:09 -- common/autotest_common.sh@10 -- # set +x 00:32:57.082 12:27:09 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:57.082 12:27:09 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1708156 00:32:57.082 12:27:09 -- spdkcli/common.sh@34 -- # waitforlisten 1708156 00:32:57.082 12:27:09 -- common/autotest_common.sh@819 -- # '[' -z 1708156 ']' 00:32:57.082 12:27:09 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:57.082 12:27:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:57.082 12:27:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:57.082 12:27:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:57.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:57.082 12:27:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:57.082 12:27:09 -- common/autotest_common.sh@10 -- # set +x 00:32:57.082 [2024-06-11 12:27:09.848663] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:32:57.082 [2024-06-11 12:27:09.848735] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1708156 ] 00:32:57.082 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.082 [2024-06-11 12:27:09.916214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:57.082 [2024-06-11 12:27:09.953520] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:57.082 [2024-06-11 12:27:09.953829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:57.082 [2024-06-11 12:27:09.953830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.652 12:27:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:57.653 12:27:10 -- common/autotest_common.sh@852 -- # return 0 00:32:57.653 12:27:10 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:57.653 12:27:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:57.653 12:27:10 -- common/autotest_common.sh@10 -- # set +x 00:32:57.653 12:27:10 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:57.653 12:27:10 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:57.653 12:27:10 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:57.653 12:27:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:57.653 12:27:10 -- common/autotest_common.sh@10 -- # set +x 00:32:57.653 12:27:10 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:57.653 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:57.653 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:57.653 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:57.653 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:57.653 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:57.653 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:57.653 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:57.653 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:57.653 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:57.653 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:57.653 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:57.653 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:57.653 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:57.653 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:57.653 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:57.653 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:32:57.653 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:57.653 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:57.653 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:57.653 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:57.653 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:57.653 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:57.653 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:57.653 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:57.653 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:57.653 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:57.653 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:57.653 ' 00:32:58.222 [2024-06-11 12:27:10.970639] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:33:00.134 [2024-06-11 12:27:12.975842] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:01.517 [2024-06-11 12:27:14.139755] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:03.430 [2024-06-11 12:27:16.274194] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:05.345 [2024-06-11 12:27:18.107827] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:06.726 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:06.726 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:06.726 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:06.726 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:06.726 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:06.726 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:06.726 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:06.726 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:06.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:06.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:06.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:06.726 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:33:06.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:06.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:06.726 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:06.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:06.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:06.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:06.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:06.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:06.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:06.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:06.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:06.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:06.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:06.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:06.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:06.726 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:06.726 12:27:19 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:06.726 12:27:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:06.726 12:27:19 -- common/autotest_common.sh@10 -- # set +x 00:33:06.726 12:27:19 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:06.726 12:27:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:06.726 12:27:19 -- common/autotest_common.sh@10 -- # set +x 00:33:06.726 12:27:19 -- spdkcli/nvmf.sh@69 -- # check_match 00:33:06.726 12:27:19 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:06.986 12:27:20 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:07.246 12:27:20 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:07.246 12:27:20 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:07.246 12:27:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:07.246 12:27:20 -- common/autotest_common.sh@10 -- # set +x 00:33:07.246 12:27:20 -- spdkcli/nvmf.sh@72 -- # timing_enter 
spdkcli_clear_nvmf_config 00:33:07.246 12:27:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:07.246 12:27:20 -- common/autotest_common.sh@10 -- # set +x 00:33:07.246 12:27:20 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:07.246 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:07.246 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:07.246 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:07.246 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:07.246 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:07.246 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:07.246 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:07.246 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:07.246 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:07.246 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:07.246 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:07.246 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:07.246 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:07.246 ' 00:33:12.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:12.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:12.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:12.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:12.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:12.529 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:12.529 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:12.529 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:12.529 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:12.529 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:12.529 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:12.529 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:12.529 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:12.529 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:12.529 12:27:24 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:12.529 12:27:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:12.529 12:27:24 -- common/autotest_common.sh@10 -- # set +x 00:33:12.529 12:27:24 -- spdkcli/nvmf.sh@90 -- # killprocess 1708156 00:33:12.529 12:27:24 -- common/autotest_common.sh@926 -- # '[' -z 1708156 ']' 00:33:12.529 12:27:24 -- 
common/autotest_common.sh@930 -- # kill -0 1708156 00:33:12.529 12:27:24 -- common/autotest_common.sh@931 -- # uname 00:33:12.529 12:27:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:12.529 12:27:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1708156 00:33:12.529 12:27:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:12.529 12:27:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:12.529 12:27:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1708156' 00:33:12.529 killing process with pid 1708156 00:33:12.529 12:27:25 -- common/autotest_common.sh@945 -- # kill 1708156 00:33:12.529 [2024-06-11 12:27:25.051224] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:33:12.529 12:27:25 -- common/autotest_common.sh@950 -- # wait 1708156 00:33:12.529 12:27:25 -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:12.529 12:27:25 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:12.529 12:27:25 -- spdkcli/common.sh@13 -- # '[' -n 1708156 ']' 00:33:12.529 12:27:25 -- spdkcli/common.sh@14 -- # killprocess 1708156 00:33:12.529 12:27:25 -- common/autotest_common.sh@926 -- # '[' -z 1708156 ']' 00:33:12.529 12:27:25 -- common/autotest_common.sh@930 -- # kill -0 1708156 00:33:12.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1708156) - No such process 00:33:12.529 12:27:25 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1708156 is not found' 00:33:12.529 Process with pid 1708156 is not found 00:33:12.529 12:27:25 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:12.529 12:27:25 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:12.529 12:27:25 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:12.529 00:33:12.529 real 0m15.507s 00:33:12.529 user 0m31.892s 00:33:12.529 sys 0m0.709s 00:33:12.529 12:27:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:12.529 12:27:25 -- common/autotest_common.sh@10 -- # set +x 00:33:12.529 ************************************ 00:33:12.529 END TEST spdkcli_nvmf_tcp 00:33:12.529 ************************************ 00:33:12.529 12:27:25 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:12.529 12:27:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:12.529 12:27:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:12.529 12:27:25 -- common/autotest_common.sh@10 -- # set +x 00:33:12.529 ************************************ 00:33:12.529 START TEST nvmf_identify_passthru 00:33:12.530 ************************************ 00:33:12.530 12:27:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:12.530 * Looking for test storage... 
00:33:12.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:12.530 12:27:25 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:12.530 12:27:25 -- nvmf/common.sh@7 -- # uname -s 00:33:12.530 12:27:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:12.530 12:27:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:12.530 12:27:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:12.530 12:27:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:12.530 12:27:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:12.530 12:27:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:12.530 12:27:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:12.530 12:27:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:12.530 12:27:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:12.530 12:27:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:12.530 12:27:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:12.530 12:27:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:12.530 12:27:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:12.530 12:27:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:12.530 12:27:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:12.530 12:27:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:12.530 12:27:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:12.530 12:27:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:12.530 12:27:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:12.530 12:27:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.530 12:27:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.530 12:27:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.530 12:27:25 -- paths/export.sh@5 -- # export PATH 00:33:12.530 12:27:25 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.530 12:27:25 -- nvmf/common.sh@46 -- # : 0 00:33:12.530 12:27:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:12.530 12:27:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:12.530 12:27:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:12.530 12:27:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:12.530 12:27:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:12.530 12:27:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:12.530 12:27:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:12.530 12:27:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:12.530 12:27:25 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:12.530 12:27:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:12.530 12:27:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:12.530 12:27:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:12.530 12:27:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.530 12:27:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.530 12:27:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.530 12:27:25 -- paths/export.sh@5 -- # export PATH 00:33:12.530 12:27:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.530 12:27:25 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:33:12.530 12:27:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:12.530 12:27:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:12.530 12:27:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:12.530 12:27:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:12.530 12:27:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:12.530 12:27:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:12.530 12:27:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:12.530 12:27:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:12.530 12:27:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:12.530 12:27:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:12.530 12:27:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:12.530 12:27:25 -- common/autotest_common.sh@10 -- # set +x 00:33:20.669 12:27:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:20.669 12:27:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:20.669 12:27:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:20.669 12:27:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:20.669 12:27:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:20.669 12:27:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:20.669 12:27:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:20.669 12:27:32 -- nvmf/common.sh@294 -- # net_devs=() 00:33:20.669 12:27:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:20.669 12:27:32 -- nvmf/common.sh@295 -- # e810=() 00:33:20.669 12:27:32 -- nvmf/common.sh@295 -- # local -ga e810 00:33:20.669 12:27:32 -- nvmf/common.sh@296 -- # x722=() 00:33:20.669 12:27:32 -- nvmf/common.sh@296 -- # local -ga x722 00:33:20.669 12:27:32 -- nvmf/common.sh@297 -- # mlx=() 00:33:20.669 12:27:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:20.669 12:27:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:20.669 12:27:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:20.669 12:27:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:20.669 12:27:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:20.669 12:27:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:20.669 12:27:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:20.669 12:27:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:20.669 12:27:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:20.669 12:27:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:20.669 12:27:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:20.669 12:27:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:20.669 12:27:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:20.669 12:27:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:20.669 12:27:32 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:20.669 12:27:32 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:20.669 12:27:32 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:20.669 12:27:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:20.669 12:27:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:20.669 12:27:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:20.669 Found 0000:31:00.0 (0x8086 - 
0x159b) 00:33:20.669 12:27:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:20.669 12:27:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:20.669 12:27:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:20.669 12:27:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:20.669 12:27:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:20.669 12:27:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:20.669 12:27:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:20.669 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:20.669 12:27:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:20.669 12:27:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:20.669 12:27:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:20.669 12:27:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:20.669 12:27:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:20.669 12:27:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:20.669 12:27:32 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:20.669 12:27:32 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:20.669 12:27:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:20.669 12:27:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:20.669 12:27:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:20.669 12:27:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:20.669 12:27:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:20.669 Found net devices under 0000:31:00.0: cvl_0_0 00:33:20.669 12:27:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:20.669 12:27:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:20.669 12:27:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:20.669 12:27:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:20.669 12:27:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:20.669 12:27:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:20.669 Found net devices under 0000:31:00.1: cvl_0_1 00:33:20.669 12:27:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:20.669 12:27:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:20.669 12:27:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:20.669 12:27:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:20.669 12:27:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:20.669 12:27:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:20.669 12:27:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:20.669 12:27:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:20.669 12:27:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:20.669 12:27:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:20.669 12:27:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:20.669 12:27:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:20.669 12:27:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:20.669 12:27:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:20.669 12:27:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:20.669 12:27:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:20.669 12:27:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:20.669 12:27:32 -- nvmf/common.sh@247 -- # ip netns add 
cvl_0_0_ns_spdk 00:33:20.669 12:27:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:20.669 12:27:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:20.669 12:27:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:20.669 12:27:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:20.669 12:27:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:20.669 12:27:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:20.669 12:27:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:20.669 12:27:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:20.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:20.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:33:20.669 00:33:20.670 --- 10.0.0.2 ping statistics --- 00:33:20.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.670 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:33:20.670 12:27:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:20.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:20.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:33:20.670 00:33:20.670 --- 10.0.0.1 ping statistics --- 00:33:20.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.670 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:33:20.670 12:27:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:20.670 12:27:32 -- nvmf/common.sh@410 -- # return 0 00:33:20.670 12:27:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:20.670 12:27:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:20.670 12:27:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:20.670 12:27:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:20.670 12:27:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:20.670 12:27:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:20.670 12:27:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:20.670 12:27:32 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:20.670 12:27:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:20.670 12:27:32 -- common/autotest_common.sh@10 -- # set +x 00:33:20.670 12:27:32 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:20.670 12:27:32 -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:20.670 12:27:32 -- common/autotest_common.sh@1509 -- # local bdfs 00:33:20.670 12:27:32 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:20.670 12:27:32 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:20.670 12:27:32 -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:20.670 12:27:32 -- common/autotest_common.sh@1498 -- # local bdfs 00:33:20.670 12:27:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:20.670 12:27:32 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:20.670 12:27:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:20.670 12:27:32 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:20.670 12:27:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:33:20.670 12:27:32 -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:33:20.670 12:27:32 -- 
target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:33:20.670 12:27:32 -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:33:20.670 12:27:32 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:33:20.670 12:27:32 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:20.670 12:27:32 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:20.670 EAL: No free 2048 kB hugepages reported on node 1 00:33:20.670 12:27:33 -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:33:20.670 12:27:33 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:33:20.670 12:27:33 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:20.670 12:27:33 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:20.670 EAL: No free 2048 kB hugepages reported on node 1 00:33:20.670 12:27:33 -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:33:20.670 12:27:33 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:20.670 12:27:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:20.670 12:27:33 -- common/autotest_common.sh@10 -- # set +x 00:33:20.670 12:27:33 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:20.670 12:27:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:20.670 12:27:33 -- common/autotest_common.sh@10 -- # set +x 00:33:20.670 12:27:33 -- target/identify_passthru.sh@31 -- # nvmfpid=1715137 00:33:20.670 12:27:33 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:20.670 12:27:33 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:20.670 12:27:33 -- target/identify_passthru.sh@35 -- # waitforlisten 1715137 00:33:20.670 12:27:33 -- common/autotest_common.sh@819 -- # '[' -z 1715137 ']' 00:33:20.670 12:27:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:20.670 12:27:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:20.670 12:27:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:20.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:20.670 12:27:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:20.670 12:27:33 -- common/autotest_common.sh@10 -- # set +x 00:33:20.930 [2024-06-11 12:27:33.721635] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:33:20.931 [2024-06-11 12:27:33.721685] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:20.931 EAL: No free 2048 kB hugepages reported on node 1 00:33:20.931 [2024-06-11 12:27:33.788251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:20.931 [2024-06-11 12:27:33.817662] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:20.931 [2024-06-11 12:27:33.817793] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:20.931 [2024-06-11 12:27:33.817802] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:20.931 [2024-06-11 12:27:33.817810] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:20.931 [2024-06-11 12:27:33.817998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:20.931 [2024-06-11 12:27:33.818011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:20.931 [2024-06-11 12:27:33.818756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:20.931 [2024-06-11 12:27:33.818870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:21.502 12:27:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:21.502 12:27:34 -- common/autotest_common.sh@852 -- # return 0 00:33:21.502 12:27:34 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:21.502 12:27:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:21.502 12:27:34 -- common/autotest_common.sh@10 -- # set +x 00:33:21.502 INFO: Log level set to 20 00:33:21.502 INFO: Requests: 00:33:21.502 { 00:33:21.502 "jsonrpc": "2.0", 00:33:21.502 "method": "nvmf_set_config", 00:33:21.502 "id": 1, 00:33:21.502 "params": { 00:33:21.502 "admin_cmd_passthru": { 00:33:21.502 "identify_ctrlr": true 00:33:21.502 } 00:33:21.502 } 00:33:21.502 } 00:33:21.502 00:33:21.502 INFO: response: 00:33:21.502 { 00:33:21.502 "jsonrpc": "2.0", 00:33:21.502 "id": 1, 00:33:21.502 "result": true 00:33:21.502 } 00:33:21.502 00:33:21.502 12:27:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:21.502 12:27:34 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:21.502 12:27:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:21.502 12:27:34 -- common/autotest_common.sh@10 -- # set +x 00:33:21.502 INFO: Setting log level to 20 00:33:21.502 INFO: Setting log level to 20 00:33:21.502 INFO: Log level set to 20 00:33:21.502 INFO: Log level set to 20 00:33:21.502 INFO: Requests: 00:33:21.502 { 00:33:21.502 "jsonrpc": "2.0", 00:33:21.502 "method": "framework_start_init", 00:33:21.502 "id": 1 00:33:21.502 } 00:33:21.502 00:33:21.502 INFO: Requests: 00:33:21.502 { 00:33:21.502 "jsonrpc": "2.0", 00:33:21.502 "method": "framework_start_init", 00:33:21.502 "id": 1 00:33:21.502 } 00:33:21.502 00:33:21.763 [2024-06-11 12:27:34.559760] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:21.763 INFO: response: 00:33:21.763 { 00:33:21.763 "jsonrpc": "2.0", 00:33:21.763 "id": 1, 00:33:21.763 "result": true 00:33:21.763 } 00:33:21.763 00:33:21.763 INFO: response: 00:33:21.763 { 00:33:21.763 "jsonrpc": "2.0", 00:33:21.763 "id": 1, 00:33:21.763 "result": true 00:33:21.763 } 00:33:21.763 00:33:21.763 12:27:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:21.763 12:27:34 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:21.763 12:27:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:21.763 12:27:34 -- common/autotest_common.sh@10 -- # set +x 00:33:21.763 INFO: Setting log level to 40 00:33:21.763 INFO: Setting log level to 40 00:33:21.763 INFO: Setting log level to 40 00:33:21.763 [2024-06-11 12:27:34.572986] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:21.763 12:27:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:21.763 12:27:34 -- target/identify_passthru.sh@39 -- # timing_exit 
start_nvmf_tgt 00:33:21.763 12:27:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:21.763 12:27:34 -- common/autotest_common.sh@10 -- # set +x 00:33:21.763 12:27:34 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:33:21.763 12:27:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:21.763 12:27:34 -- common/autotest_common.sh@10 -- # set +x 00:33:22.024 Nvme0n1 00:33:22.024 12:27:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:22.024 12:27:34 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:22.024 12:27:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:22.024 12:27:34 -- common/autotest_common.sh@10 -- # set +x 00:33:22.024 12:27:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:22.024 12:27:34 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:22.024 12:27:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:22.024 12:27:34 -- common/autotest_common.sh@10 -- # set +x 00:33:22.024 12:27:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:22.024 12:27:34 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:22.024 12:27:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:22.024 12:27:34 -- common/autotest_common.sh@10 -- # set +x 00:33:22.024 [2024-06-11 12:27:34.956324] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:22.024 12:27:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:22.024 12:27:34 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:22.024 12:27:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:22.024 12:27:34 -- common/autotest_common.sh@10 -- # set +x 00:33:22.024 [2024-06-11 12:27:34.968137] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:33:22.024 [ 00:33:22.024 { 00:33:22.024 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:22.024 "subtype": "Discovery", 00:33:22.024 "listen_addresses": [], 00:33:22.024 "allow_any_host": true, 00:33:22.024 "hosts": [] 00:33:22.024 }, 00:33:22.024 { 00:33:22.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:22.024 "subtype": "NVMe", 00:33:22.024 "listen_addresses": [ 00:33:22.024 { 00:33:22.024 "transport": "TCP", 00:33:22.024 "trtype": "TCP", 00:33:22.024 "adrfam": "IPv4", 00:33:22.024 "traddr": "10.0.0.2", 00:33:22.024 "trsvcid": "4420" 00:33:22.024 } 00:33:22.024 ], 00:33:22.024 "allow_any_host": true, 00:33:22.024 "hosts": [], 00:33:22.024 "serial_number": "SPDK00000000000001", 00:33:22.024 "model_number": "SPDK bdev Controller", 00:33:22.024 "max_namespaces": 1, 00:33:22.024 "min_cntlid": 1, 00:33:22.024 "max_cntlid": 65519, 00:33:22.024 "namespaces": [ 00:33:22.024 { 00:33:22.024 "nsid": 1, 00:33:22.024 "bdev_name": "Nvme0n1", 00:33:22.024 "name": "Nvme0n1", 00:33:22.024 "nguid": "36344730526054940025384500000027", 00:33:22.024 "uuid": "36344730-5260-5494-0025-384500000027" 00:33:22.024 } 00:33:22.024 ] 00:33:22.024 } 00:33:22.024 ] 00:33:22.024 12:27:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:22.024 12:27:34 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:22.024 12:27:34 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:22.024 12:27:34 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:22.024 EAL: No free 2048 kB hugepages reported on node 1 00:33:22.285 12:27:35 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:33:22.285 12:27:35 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:22.285 12:27:35 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:22.286 12:27:35 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:22.286 EAL: No free 2048 kB hugepages reported on node 1 00:33:22.286 12:27:35 -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:33:22.286 12:27:35 -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:33:22.286 12:27:35 -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:33:22.286 12:27:35 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:22.286 12:27:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:22.286 12:27:35 -- common/autotest_common.sh@10 -- # set +x 00:33:22.286 12:27:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:22.286 12:27:35 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:22.286 12:27:35 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:22.286 12:27:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:22.286 12:27:35 -- nvmf/common.sh@116 -- # sync 00:33:22.286 12:27:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:22.286 12:27:35 -- nvmf/common.sh@119 -- # set +e 00:33:22.286 12:27:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:22.286 12:27:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:22.286 rmmod nvme_tcp 00:33:22.546 rmmod nvme_fabrics 00:33:22.546 rmmod nvme_keyring 00:33:22.546 12:27:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:22.546 12:27:35 -- nvmf/common.sh@123 -- # set -e 00:33:22.546 12:27:35 -- nvmf/common.sh@124 -- # return 0 00:33:22.546 12:27:35 -- nvmf/common.sh@477 -- # '[' -n 1715137 ']' 00:33:22.546 12:27:35 -- nvmf/common.sh@478 -- # killprocess 1715137 00:33:22.546 12:27:35 -- common/autotest_common.sh@926 -- # '[' -z 1715137 ']' 00:33:22.546 12:27:35 -- common/autotest_common.sh@930 -- # kill -0 1715137 00:33:22.546 12:27:35 -- common/autotest_common.sh@931 -- # uname 00:33:22.546 12:27:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:22.546 12:27:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1715137 00:33:22.546 12:27:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:22.546 12:27:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:22.546 12:27:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1715137' 00:33:22.546 killing process with pid 1715137 00:33:22.546 12:27:35 -- common/autotest_common.sh@945 -- # kill 1715137 00:33:22.546 [2024-06-11 12:27:35.424832] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:33:22.546 12:27:35 -- common/autotest_common.sh@950 -- # wait 1715137 00:33:22.806 12:27:35 -- nvmf/common.sh@480 
-- # '[' '' == iso ']' 00:33:22.806 12:27:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:22.806 12:27:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:22.806 12:27:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:22.806 12:27:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:22.806 12:27:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.806 12:27:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:22.806 12:27:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:24.718 12:27:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:24.718 00:33:24.718 real 0m12.522s 00:33:24.718 user 0m9.775s 00:33:24.718 sys 0m5.969s 00:33:24.718 12:27:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:24.718 12:27:37 -- common/autotest_common.sh@10 -- # set +x 00:33:24.718 ************************************ 00:33:24.718 END TEST nvmf_identify_passthru 00:33:24.718 ************************************ 00:33:24.979 12:27:37 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:24.979 12:27:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:24.979 12:27:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:24.979 12:27:37 -- common/autotest_common.sh@10 -- # set +x 00:33:24.979 ************************************ 00:33:24.979 START TEST nvmf_dif 00:33:24.979 ************************************ 00:33:24.979 12:27:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:24.979 * Looking for test storage... 00:33:24.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:24.979 12:27:37 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:24.979 12:27:37 -- nvmf/common.sh@7 -- # uname -s 00:33:24.979 12:27:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:24.979 12:27:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:24.979 12:27:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:24.979 12:27:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:24.979 12:27:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:24.979 12:27:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:24.979 12:27:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:24.979 12:27:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:24.979 12:27:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:24.979 12:27:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:24.979 12:27:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:24.979 12:27:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:24.979 12:27:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:24.979 12:27:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:24.979 12:27:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:24.979 12:27:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:24.979 12:27:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:24.979 12:27:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:24.979 12:27:37 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:33:24.979 12:27:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.979 12:27:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.979 12:27:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.979 12:27:37 -- paths/export.sh@5 -- # export PATH 00:33:24.979 12:27:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.979 12:27:37 -- nvmf/common.sh@46 -- # : 0 00:33:24.979 12:27:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:24.979 12:27:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:24.979 12:27:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:24.980 12:27:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:24.980 12:27:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:24.980 12:27:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:24.980 12:27:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:24.980 12:27:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:24.980 12:27:37 -- target/dif.sh@15 -- # NULL_META=16 00:33:24.980 12:27:37 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:24.980 12:27:37 -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:24.980 12:27:37 -- target/dif.sh@15 -- # NULL_DIF=1 00:33:24.980 12:27:37 -- target/dif.sh@135 -- # nvmftestinit 00:33:24.980 12:27:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:24.980 12:27:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:24.980 12:27:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:24.980 12:27:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:24.980 12:27:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:24.980 12:27:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:24.980 12:27:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:24.980 12:27:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:24.980 12:27:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:24.980 12:27:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 
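The NULL_META/NULL_BLOCK_SIZE/NULL_SIZE/NULL_DIF defaults set by dif.sh above are what the per-test create_subsystem helper later hands to bdev_null_create: a 64 MiB null bdev with 512-byte blocks, 16 bytes of per-block metadata, and end-to-end protection type 1. As a standalone sketch (an assumption here is that rpc.py talks to the target's default RPC socket, /var/tmp/spdk.sock), the equivalent manual call would be:

    # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1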
00:33:24.980 12:27:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:24.980 12:27:37 -- common/autotest_common.sh@10 -- # set +x 00:33:31.570 12:27:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:31.570 12:27:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:31.570 12:27:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:31.570 12:27:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:31.570 12:27:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:31.570 12:27:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:31.570 12:27:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:31.570 12:27:44 -- nvmf/common.sh@294 -- # net_devs=() 00:33:31.570 12:27:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:31.570 12:27:44 -- nvmf/common.sh@295 -- # e810=() 00:33:31.570 12:27:44 -- nvmf/common.sh@295 -- # local -ga e810 00:33:31.570 12:27:44 -- nvmf/common.sh@296 -- # x722=() 00:33:31.570 12:27:44 -- nvmf/common.sh@296 -- # local -ga x722 00:33:31.570 12:27:44 -- nvmf/common.sh@297 -- # mlx=() 00:33:31.570 12:27:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:31.570 12:27:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:31.570 12:27:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:31.570 12:27:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:31.570 12:27:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:31.570 12:27:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:31.570 12:27:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:31.570 12:27:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:31.570 12:27:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:31.570 12:27:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:31.570 12:27:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:31.570 12:27:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:31.570 12:27:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:31.570 12:27:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:31.570 12:27:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:31.570 12:27:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:31.570 12:27:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:31.570 12:27:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:31.570 12:27:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:31.570 12:27:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:31.570 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:31.570 12:27:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:31.570 12:27:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:31.570 12:27:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.570 12:27:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.570 12:27:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:31.570 12:27:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:31.570 12:27:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:31.570 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:31.570 12:27:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:31.570 12:27:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:31.570 12:27:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:33:31.570 12:27:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.570 12:27:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:31.570 12:27:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:31.570 12:27:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:31.570 12:27:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:31.570 12:27:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:31.570 12:27:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.570 12:27:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:31.570 12:27:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.570 12:27:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:31.570 Found net devices under 0000:31:00.0: cvl_0_0 00:33:31.570 12:27:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.570 12:27:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:31.570 12:27:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.570 12:27:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:31.570 12:27:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.570 12:27:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:31.570 Found net devices under 0000:31:00.1: cvl_0_1 00:33:31.570 12:27:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.570 12:27:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:31.570 12:27:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:31.570 12:27:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:31.570 12:27:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:31.570 12:27:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:31.570 12:27:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:31.570 12:27:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:31.570 12:27:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:31.570 12:27:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:31.570 12:27:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:31.570 12:27:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:31.570 12:27:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:31.570 12:27:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:31.570 12:27:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:31.570 12:27:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:31.831 12:27:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:31.831 12:27:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:31.831 12:27:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:31.831 12:27:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:31.831 12:27:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:31.831 12:27:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:31.831 12:27:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:31.831 12:27:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:32.092 12:27:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:32.092 12:27:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:32.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:33:32.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:33:32.092 00:33:32.092 --- 10.0.0.2 ping statistics --- 00:33:32.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:32.092 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:33:32.092 12:27:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:32.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:32.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:33:32.092 00:33:32.092 --- 10.0.0.1 ping statistics --- 00:33:32.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:32.092 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:33:32.092 12:27:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:32.092 12:27:44 -- nvmf/common.sh@410 -- # return 0 00:33:32.092 12:27:44 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:33:32.092 12:27:44 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:35.393 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:33:35.393 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:33:35.393 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:33:35.393 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:33:35.393 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:33:35.393 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:33:35.393 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:33:35.393 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:33:35.393 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:33:35.393 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:33:35.393 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:33:35.393 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:33:35.393 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:33:35.393 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:33:35.393 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:33:35.393 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:33:35.393 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:33:35.393 12:27:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:35.393 12:27:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:35.393 12:27:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:35.393 12:27:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:35.393 12:27:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:35.393 12:27:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:35.393 12:27:48 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:35.393 12:27:48 -- target/dif.sh@137 -- # nvmfappstart 00:33:35.393 12:27:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:35.393 12:27:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:35.393 12:27:48 -- common/autotest_common.sh@10 -- # set +x 00:33:35.393 12:27:48 -- nvmf/common.sh@469 -- # nvmfpid=1721112 00:33:35.393 12:27:48 -- nvmf/common.sh@470 -- # waitforlisten 1721112 00:33:35.393 12:27:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:35.393 12:27:48 -- common/autotest_common.sh@819 -- # '[' -z 1721112 ']' 00:33:35.393 12:27:48 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:33:35.393 12:27:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:35.393 12:27:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:35.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:35.393 12:27:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:35.393 12:27:48 -- common/autotest_common.sh@10 -- # set +x 00:33:35.653 [2024-06-11 12:27:48.439416] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:33:35.653 [2024-06-11 12:27:48.439470] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:35.653 EAL: No free 2048 kB hugepages reported on node 1 00:33:35.653 [2024-06-11 12:27:48.508096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:35.653 [2024-06-11 12:27:48.543261] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:35.653 [2024-06-11 12:27:48.543384] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:35.653 [2024-06-11 12:27:48.543392] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:35.653 [2024-06-11 12:27:48.543400] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:35.653 [2024-06-11 12:27:48.543419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:36.223 12:27:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:36.223 12:27:49 -- common/autotest_common.sh@852 -- # return 0 00:33:36.223 12:27:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:36.223 12:27:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:36.223 12:27:49 -- common/autotest_common.sh@10 -- # set +x 00:33:36.223 12:27:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:36.223 12:27:49 -- target/dif.sh@139 -- # create_transport 00:33:36.223 12:27:49 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:36.223 12:27:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:36.223 12:27:49 -- common/autotest_common.sh@10 -- # set +x 00:33:36.223 [2024-06-11 12:27:49.245026] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:36.223 12:27:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:36.223 12:27:49 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:36.223 12:27:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:36.223 12:27:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:36.223 12:27:49 -- common/autotest_common.sh@10 -- # set +x 00:33:36.223 ************************************ 00:33:36.223 START TEST fio_dif_1_default 00:33:36.223 ************************************ 00:33:36.484 12:27:49 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:33:36.484 12:27:49 -- target/dif.sh@86 -- # create_subsystems 0 00:33:36.484 12:27:49 -- target/dif.sh@28 -- # local sub 00:33:36.484 12:27:49 -- target/dif.sh@30 -- # for sub in "$@" 00:33:36.484 12:27:49 -- target/dif.sh@31 -- # create_subsystem 0 00:33:36.484 12:27:49 -- target/dif.sh@18 -- # local sub_id=0 00:33:36.484 12:27:49 -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:36.484 12:27:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:36.484 12:27:49 -- common/autotest_common.sh@10 -- # set +x 00:33:36.484 bdev_null0 00:33:36.484 12:27:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:36.484 12:27:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:36.484 12:27:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:36.484 12:27:49 -- common/autotest_common.sh@10 -- # set +x 00:33:36.484 12:27:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:36.484 12:27:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:36.484 12:27:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:36.484 12:27:49 -- common/autotest_common.sh@10 -- # set +x 00:33:36.484 12:27:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:36.484 12:27:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:36.484 12:27:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:36.484 12:27:49 -- common/autotest_common.sh@10 -- # set +x 00:33:36.484 [2024-06-11 12:27:49.301299] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:36.484 12:27:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:36.484 12:27:49 -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:36.484 12:27:49 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:36.484 12:27:49 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:36.484 12:27:49 -- nvmf/common.sh@520 -- # config=() 00:33:36.484 12:27:49 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:36.484 12:27:49 -- nvmf/common.sh@520 -- # local subsystem config 00:33:36.484 12:27:49 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:36.484 12:27:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:36.484 12:27:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:36.484 { 00:33:36.484 "params": { 00:33:36.484 "name": "Nvme$subsystem", 00:33:36.484 "trtype": "$TEST_TRANSPORT", 00:33:36.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:36.484 "adrfam": "ipv4", 00:33:36.484 "trsvcid": "$NVMF_PORT", 00:33:36.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:36.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:36.484 "hdgst": ${hdgst:-false}, 00:33:36.484 "ddgst": ${ddgst:-false} 00:33:36.484 }, 00:33:36.484 "method": "bdev_nvme_attach_controller" 00:33:36.484 } 00:33:36.484 EOF 00:33:36.484 )") 00:33:36.484 12:27:49 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:36.484 12:27:49 -- target/dif.sh@82 -- # gen_fio_conf 00:33:36.484 12:27:49 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:36.484 12:27:49 -- target/dif.sh@54 -- # local file 00:33:36.484 12:27:49 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:36.484 12:27:49 -- target/dif.sh@56 -- # cat 00:33:36.484 12:27:49 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:36.484 12:27:49 -- common/autotest_common.sh@1320 -- # shift 00:33:36.484 12:27:49 -- 
common/autotest_common.sh@1322 -- # local asan_lib= 00:33:36.484 12:27:49 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:36.484 12:27:49 -- nvmf/common.sh@542 -- # cat 00:33:36.484 12:27:49 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:36.484 12:27:49 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:36.484 12:27:49 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:36.484 12:27:49 -- target/dif.sh@72 -- # (( file <= files )) 00:33:36.484 12:27:49 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:36.484 12:27:49 -- nvmf/common.sh@544 -- # jq . 00:33:36.484 12:27:49 -- nvmf/common.sh@545 -- # IFS=, 00:33:36.484 12:27:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:36.484 "params": { 00:33:36.484 "name": "Nvme0", 00:33:36.484 "trtype": "tcp", 00:33:36.484 "traddr": "10.0.0.2", 00:33:36.484 "adrfam": "ipv4", 00:33:36.484 "trsvcid": "4420", 00:33:36.484 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:36.484 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:36.484 "hdgst": false, 00:33:36.484 "ddgst": false 00:33:36.484 }, 00:33:36.484 "method": "bdev_nvme_attach_controller" 00:33:36.484 }' 00:33:36.484 12:27:49 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:36.484 12:27:49 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:36.484 12:27:49 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:36.484 12:27:49 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:36.484 12:27:49 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:36.484 12:27:49 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:36.484 12:27:49 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:36.484 12:27:49 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:36.484 12:27:49 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:36.484 12:27:49 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:36.747 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:36.747 fio-3.35 00:33:36.747 Starting 1 thread 00:33:36.747 EAL: No free 2048 kB hugepages reported on node 1 00:33:37.381 [2024-06-11 12:27:50.188856] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
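The JSON block printed just above is what fio's spdk_bdev ioengine consumes through --spdk_json_conf: it tells the plugin to attach an NVMe-oF controller named Nvme0 over TCP to 10.0.0.2:4420 and run the job against the resulting bdev. For debugging outside the fio plugin, the same initiator-side attach can be issued against a running SPDK application with the standard RPC (a sketch; the -b and -q names simply mirror the generated config, and a default RPC socket is assumed):

    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0

The "RPC Unix domain socket path /var/tmp/spdk.sock in use" errors that follow appear to be benign in this run: fio's embedded SPDK application cannot claim the default RPC socket already held by the nvmf target, and the job still completes with the results shown below.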
00:33:37.381 [2024-06-11 12:27:50.188911] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:47.375 00:33:47.375 filename0: (groupid=0, jobs=1): err= 0: pid=1721650: Tue Jun 11 12:28:00 2024 00:33:47.375 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10020msec) 00:33:47.375 slat (nsec): min=5349, max=55603, avg=6273.45, stdev=2198.99 00:33:47.375 clat (usec): min=40847, max=44077, avg=41045.06, stdev=301.24 00:33:47.375 lat (usec): min=40855, max=44112, avg=41051.33, stdev=301.73 00:33:47.375 clat percentiles (usec): 00:33:47.375 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:33:47.375 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:47.375 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:47.375 | 99.00th=[42206], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:33:47.375 | 99.99th=[44303] 00:33:47.375 bw ( KiB/s): min= 384, max= 416, per=99.58%, avg=388.80, stdev=11.72, samples=20 00:33:47.375 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:33:47.375 lat (msec) : 50=100.00% 00:33:47.375 cpu : usr=96.05%, sys=3.72%, ctx=21, majf=0, minf=268 00:33:47.375 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:47.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.375 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.375 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:47.375 00:33:47.375 Run status group 0 (all jobs): 00:33:47.375 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10020-10020msec 00:33:47.635 12:28:00 -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:47.635 12:28:00 -- target/dif.sh@43 -- # local sub 00:33:47.635 12:28:00 -- target/dif.sh@45 -- # for sub in "$@" 00:33:47.635 12:28:00 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:47.635 12:28:00 -- target/dif.sh@36 -- # local sub_id=0 00:33:47.635 12:28:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:47.635 12:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:47.635 12:28:00 -- common/autotest_common.sh@10 -- # set +x 00:33:47.635 12:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:47.635 12:28:00 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:47.635 12:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:47.635 12:28:00 -- common/autotest_common.sh@10 -- # set +x 00:33:47.635 12:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:47.635 00:33:47.635 real 0m11.242s 00:33:47.635 user 0m25.064s 00:33:47.635 sys 0m0.699s 00:33:47.635 12:28:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:47.635 12:28:00 -- common/autotest_common.sh@10 -- # set +x 00:33:47.635 ************************************ 00:33:47.635 END TEST fio_dif_1_default 00:33:47.635 ************************************ 00:33:47.635 12:28:00 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:47.635 12:28:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:47.635 12:28:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:47.635 12:28:00 -- common/autotest_common.sh@10 -- # set +x 00:33:47.635 ************************************ 00:33:47.635 START TEST fio_dif_1_multi_subsystems 00:33:47.635 
************************************ 00:33:47.635 12:28:00 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:33:47.635 12:28:00 -- target/dif.sh@92 -- # local files=1 00:33:47.635 12:28:00 -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:47.635 12:28:00 -- target/dif.sh@28 -- # local sub 00:33:47.635 12:28:00 -- target/dif.sh@30 -- # for sub in "$@" 00:33:47.635 12:28:00 -- target/dif.sh@31 -- # create_subsystem 0 00:33:47.635 12:28:00 -- target/dif.sh@18 -- # local sub_id=0 00:33:47.635 12:28:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:47.635 12:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:47.635 12:28:00 -- common/autotest_common.sh@10 -- # set +x 00:33:47.635 bdev_null0 00:33:47.635 12:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:47.635 12:28:00 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:47.635 12:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:47.635 12:28:00 -- common/autotest_common.sh@10 -- # set +x 00:33:47.635 12:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:47.635 12:28:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:47.635 12:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:47.635 12:28:00 -- common/autotest_common.sh@10 -- # set +x 00:33:47.635 12:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:47.635 12:28:00 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:47.635 12:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:47.635 12:28:00 -- common/autotest_common.sh@10 -- # set +x 00:33:47.635 [2024-06-11 12:28:00.587491] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:47.635 12:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:47.635 12:28:00 -- target/dif.sh@30 -- # for sub in "$@" 00:33:47.635 12:28:00 -- target/dif.sh@31 -- # create_subsystem 1 00:33:47.635 12:28:00 -- target/dif.sh@18 -- # local sub_id=1 00:33:47.635 12:28:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:47.635 12:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:47.635 12:28:00 -- common/autotest_common.sh@10 -- # set +x 00:33:47.635 bdev_null1 00:33:47.635 12:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:47.636 12:28:00 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:47.636 12:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:47.636 12:28:00 -- common/autotest_common.sh@10 -- # set +x 00:33:47.636 12:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:47.636 12:28:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:47.636 12:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:47.636 12:28:00 -- common/autotest_common.sh@10 -- # set +x 00:33:47.636 12:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:47.636 12:28:00 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:47.636 12:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:47.636 12:28:00 -- common/autotest_common.sh@10 -- # set +x 
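fio_dif_1_multi_subsystems repeats the single-subsystem setup twice: two DIF type 1 null bdevs, each exported behind its own NQN on the same 10.0.0.2:4420 TCP listener. Condensed from the trace above, the target-side sequence is roughly the following (a sketch using rpc.py directly instead of the test's rpc_cmd wrapper):

    for i in 0 1; do
        # one null bdev and one subsystem per index, both on the same listener
        scripts/rpc.py bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            --serial-number 53313233-$i --allow-any-host
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done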
00:33:47.636 12:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:47.636 12:28:00 -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:47.636 12:28:00 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:47.636 12:28:00 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:47.636 12:28:00 -- nvmf/common.sh@520 -- # config=() 00:33:47.636 12:28:00 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:47.636 12:28:00 -- nvmf/common.sh@520 -- # local subsystem config 00:33:47.636 12:28:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:47.636 12:28:00 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:47.636 12:28:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:47.636 { 00:33:47.636 "params": { 00:33:47.636 "name": "Nvme$subsystem", 00:33:47.636 "trtype": "$TEST_TRANSPORT", 00:33:47.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:47.636 "adrfam": "ipv4", 00:33:47.636 "trsvcid": "$NVMF_PORT", 00:33:47.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:47.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:47.636 "hdgst": ${hdgst:-false}, 00:33:47.636 "ddgst": ${ddgst:-false} 00:33:47.636 }, 00:33:47.636 "method": "bdev_nvme_attach_controller" 00:33:47.636 } 00:33:47.636 EOF 00:33:47.636 )") 00:33:47.636 12:28:00 -- target/dif.sh@82 -- # gen_fio_conf 00:33:47.636 12:28:00 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:47.636 12:28:00 -- target/dif.sh@54 -- # local file 00:33:47.636 12:28:00 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:47.636 12:28:00 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:47.636 12:28:00 -- target/dif.sh@56 -- # cat 00:33:47.636 12:28:00 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:47.636 12:28:00 -- common/autotest_common.sh@1320 -- # shift 00:33:47.636 12:28:00 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:47.636 12:28:00 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:47.636 12:28:00 -- nvmf/common.sh@542 -- # cat 00:33:47.636 12:28:00 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:47.636 12:28:00 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:47.636 12:28:00 -- target/dif.sh@72 -- # (( file <= files )) 00:33:47.636 12:28:00 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:47.636 12:28:00 -- target/dif.sh@73 -- # cat 00:33:47.636 12:28:00 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:47.636 12:28:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:47.636 12:28:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:47.636 { 00:33:47.636 "params": { 00:33:47.636 "name": "Nvme$subsystem", 00:33:47.636 "trtype": "$TEST_TRANSPORT", 00:33:47.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:47.636 "adrfam": "ipv4", 00:33:47.636 "trsvcid": "$NVMF_PORT", 00:33:47.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:47.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:47.636 "hdgst": ${hdgst:-false}, 00:33:47.636 "ddgst": ${ddgst:-false} 00:33:47.636 }, 00:33:47.636 "method": "bdev_nvme_attach_controller" 00:33:47.636 } 00:33:47.636 EOF 00:33:47.636 )") 00:33:47.636 12:28:00 -- target/dif.sh@72 -- # (( file++ )) 00:33:47.636 
12:28:00 -- target/dif.sh@72 -- # (( file <= files )) 00:33:47.636 12:28:00 -- nvmf/common.sh@542 -- # cat 00:33:47.636 12:28:00 -- nvmf/common.sh@544 -- # jq . 00:33:47.636 12:28:00 -- nvmf/common.sh@545 -- # IFS=, 00:33:47.636 12:28:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:47.636 "params": { 00:33:47.636 "name": "Nvme0", 00:33:47.636 "trtype": "tcp", 00:33:47.636 "traddr": "10.0.0.2", 00:33:47.636 "adrfam": "ipv4", 00:33:47.636 "trsvcid": "4420", 00:33:47.636 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:47.636 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:47.636 "hdgst": false, 00:33:47.636 "ddgst": false 00:33:47.636 }, 00:33:47.636 "method": "bdev_nvme_attach_controller" 00:33:47.636 },{ 00:33:47.636 "params": { 00:33:47.636 "name": "Nvme1", 00:33:47.636 "trtype": "tcp", 00:33:47.636 "traddr": "10.0.0.2", 00:33:47.636 "adrfam": "ipv4", 00:33:47.636 "trsvcid": "4420", 00:33:47.636 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:47.636 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:47.636 "hdgst": false, 00:33:47.636 "ddgst": false 00:33:47.636 }, 00:33:47.636 "method": "bdev_nvme_attach_controller" 00:33:47.636 }' 00:33:47.916 12:28:00 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:47.916 12:28:00 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:47.916 12:28:00 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:47.916 12:28:00 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:47.916 12:28:00 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:47.916 12:28:00 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:47.916 12:28:00 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:47.916 12:28:00 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:47.916 12:28:00 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:47.916 12:28:00 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:48.182 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:48.182 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:48.182 fio-3.35 00:33:48.182 Starting 2 threads 00:33:48.182 EAL: No free 2048 kB hugepages reported on node 1 00:33:48.751 [2024-06-11 12:28:01.635595] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
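With two subsystems attached, each fio job section targets one of the attached controllers' namespaces, and the combined JSON above carries one bdev_nvme_attach_controller entry per subsystem (Nvme0 and Nvme1). If either attach misbehaves, the listener can be probed independently of fio with the identify tool already used earlier in this run (sketch; substitute cnode0 or cnode1 as needed):

    ./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'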
00:33:48.751 [2024-06-11 12:28:01.635640] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:00.975 00:34:00.975 filename0: (groupid=0, jobs=1): err= 0: pid=1724188: Tue Jun 11 12:28:11 2024 00:34:00.975 read: IOPS=188, BW=756KiB/s (774kB/s)(7584KiB/10032msec) 00:34:00.975 slat (nsec): min=5383, max=36253, avg=6549.97, stdev=1603.17 00:34:00.975 clat (usec): min=534, max=42933, avg=21146.17, stdev=20195.76 00:34:00.975 lat (usec): min=542, max=42969, avg=21152.72, stdev=20195.59 00:34:00.976 clat percentiles (usec): 00:34:00.976 | 1.00th=[ 635], 5.00th=[ 693], 10.00th=[ 742], 20.00th=[ 898], 00:34:00.976 | 30.00th=[ 922], 40.00th=[ 938], 50.00th=[40633], 60.00th=[41157], 00:34:00.976 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:34:00.976 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:00.976 | 99.99th=[42730] 00:34:00.976 bw ( KiB/s): min= 672, max= 768, per=50.05%, avg=756.80, stdev=28.00, samples=20 00:34:00.976 iops : min= 168, max= 192, avg=189.20, stdev= 7.00, samples=20 00:34:00.976 lat (usec) : 750=10.07%, 1000=38.87% 00:34:00.976 lat (msec) : 2=0.84%, 50=50.21% 00:34:00.976 cpu : usr=97.98%, sys=1.79%, ctx=16, majf=0, minf=144 00:34:00.976 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:00.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.976 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.976 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:00.976 filename1: (groupid=0, jobs=1): err= 0: pid=1724189: Tue Jun 11 12:28:11 2024 00:34:00.976 read: IOPS=188, BW=756KiB/s (774kB/s)(7568KiB/10017msec) 00:34:00.976 slat (nsec): min=5378, max=54306, avg=6486.05, stdev=1957.89 00:34:00.976 clat (usec): min=640, max=43018, avg=21159.53, stdev=20130.34 00:34:00.976 lat (usec): min=648, max=43024, avg=21166.01, stdev=20130.15 00:34:00.976 clat percentiles (usec): 00:34:00.976 | 1.00th=[ 693], 5.00th=[ 865], 10.00th=[ 898], 20.00th=[ 914], 00:34:00.976 | 30.00th=[ 930], 40.00th=[ 947], 50.00th=[41157], 60.00th=[41157], 00:34:00.976 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:00.976 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:34:00.976 | 99.99th=[43254] 00:34:00.976 bw ( KiB/s): min= 640, max= 768, per=49.99%, avg=755.20, stdev=33.48, samples=20 00:34:00.976 iops : min= 160, max= 192, avg=188.80, stdev= 8.37, samples=20 00:34:00.976 lat (usec) : 750=3.17%, 1000=46.19% 00:34:00.976 lat (msec) : 2=0.32%, 50=50.32% 00:34:00.976 cpu : usr=98.14%, sys=1.66%, ctx=11, majf=0, minf=223 00:34:00.976 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:00.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.976 issued rwts: total=1892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.976 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:00.976 00:34:00.976 Run status group 0 (all jobs): 00:34:00.976 READ: bw=1510KiB/s (1547kB/s), 756KiB/s-756KiB/s (774kB/s-774kB/s), io=14.8MiB (15.5MB), run=10017-10032msec 00:34:00.976 12:28:11 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:00.976 12:28:11 -- target/dif.sh@43 -- # local sub 00:34:00.976 12:28:11 -- target/dif.sh@45 -- # for sub in "$@" 00:34:00.976 
12:28:11 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:00.976 12:28:11 -- target/dif.sh@36 -- # local sub_id=0 00:34:00.976 12:28:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:00.976 12:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:00.976 12:28:11 -- common/autotest_common.sh@10 -- # set +x 00:34:00.976 12:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:00.976 12:28:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:00.976 12:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:00.976 12:28:11 -- common/autotest_common.sh@10 -- # set +x 00:34:00.976 12:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:00.976 12:28:11 -- target/dif.sh@45 -- # for sub in "$@" 00:34:00.976 12:28:11 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:00.976 12:28:11 -- target/dif.sh@36 -- # local sub_id=1 00:34:00.976 12:28:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:00.976 12:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:00.976 12:28:11 -- common/autotest_common.sh@10 -- # set +x 00:34:00.976 12:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:00.976 12:28:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:00.976 12:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:00.976 12:28:11 -- common/autotest_common.sh@10 -- # set +x 00:34:00.976 12:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:00.976 00:34:00.976 real 0m11.385s 00:34:00.976 user 0m35.917s 00:34:00.976 sys 0m0.709s 00:34:00.976 12:28:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:00.976 12:28:11 -- common/autotest_common.sh@10 -- # set +x 00:34:00.976 ************************************ 00:34:00.976 END TEST fio_dif_1_multi_subsystems 00:34:00.976 ************************************ 00:34:00.976 12:28:11 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:00.976 12:28:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:00.976 12:28:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:00.976 12:28:11 -- common/autotest_common.sh@10 -- # set +x 00:34:00.976 ************************************ 00:34:00.976 START TEST fio_dif_rand_params 00:34:00.976 ************************************ 00:34:00.976 12:28:11 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:34:00.976 12:28:11 -- target/dif.sh@100 -- # local NULL_DIF 00:34:00.976 12:28:11 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:00.976 12:28:11 -- target/dif.sh@103 -- # NULL_DIF=3 00:34:00.976 12:28:11 -- target/dif.sh@103 -- # bs=128k 00:34:00.976 12:28:11 -- target/dif.sh@103 -- # numjobs=3 00:34:00.976 12:28:11 -- target/dif.sh@103 -- # iodepth=3 00:34:00.976 12:28:11 -- target/dif.sh@103 -- # runtime=5 00:34:00.976 12:28:11 -- target/dif.sh@105 -- # create_subsystems 0 00:34:00.976 12:28:11 -- target/dif.sh@28 -- # local sub 00:34:00.976 12:28:11 -- target/dif.sh@30 -- # for sub in "$@" 00:34:00.976 12:28:11 -- target/dif.sh@31 -- # create_subsystem 0 00:34:00.976 12:28:11 -- target/dif.sh@18 -- # local sub_id=0 00:34:00.976 12:28:11 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:00.976 12:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:00.976 12:28:11 -- common/autotest_common.sh@10 -- # set +x 00:34:00.976 bdev_null0 00:34:00.976 12:28:11 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:00.976 12:28:11 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:00.976 12:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:00.976 12:28:11 -- common/autotest_common.sh@10 -- # set +x 00:34:00.976 12:28:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:00.976 12:28:12 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:00.976 12:28:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:00.976 12:28:12 -- common/autotest_common.sh@10 -- # set +x 00:34:00.976 12:28:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:00.976 12:28:12 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:00.976 12:28:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:00.976 12:28:12 -- common/autotest_common.sh@10 -- # set +x 00:34:00.976 [2024-06-11 12:28:12.018163] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:00.976 12:28:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:00.976 12:28:12 -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:00.976 12:28:12 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:00.976 12:28:12 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:00.976 12:28:12 -- nvmf/common.sh@520 -- # config=() 00:34:00.976 12:28:12 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:00.976 12:28:12 -- nvmf/common.sh@520 -- # local subsystem config 00:34:00.976 12:28:12 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:00.976 12:28:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:00.976 12:28:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:00.976 { 00:34:00.976 "params": { 00:34:00.976 "name": "Nvme$subsystem", 00:34:00.976 "trtype": "$TEST_TRANSPORT", 00:34:00.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:00.976 "adrfam": "ipv4", 00:34:00.976 "trsvcid": "$NVMF_PORT", 00:34:00.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:00.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:00.976 "hdgst": ${hdgst:-false}, 00:34:00.976 "ddgst": ${ddgst:-false} 00:34:00.976 }, 00:34:00.976 "method": "bdev_nvme_attach_controller" 00:34:00.976 } 00:34:00.976 EOF 00:34:00.976 )") 00:34:00.976 12:28:12 -- target/dif.sh@82 -- # gen_fio_conf 00:34:00.976 12:28:12 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:00.976 12:28:12 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:00.976 12:28:12 -- target/dif.sh@54 -- # local file 00:34:00.976 12:28:12 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:00.976 12:28:12 -- target/dif.sh@56 -- # cat 00:34:00.976 12:28:12 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:00.976 12:28:12 -- common/autotest_common.sh@1320 -- # shift 00:34:00.976 12:28:12 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:00.976 12:28:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:00.976 12:28:12 -- nvmf/common.sh@542 -- # cat 00:34:00.976 12:28:12 -- common/autotest_common.sh@1324 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:00.976 12:28:12 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:00.976 12:28:12 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:00.976 12:28:12 -- target/dif.sh@72 -- # (( file <= files )) 00:34:00.976 12:28:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:00.976 12:28:12 -- nvmf/common.sh@544 -- # jq . 00:34:00.976 12:28:12 -- nvmf/common.sh@545 -- # IFS=, 00:34:00.976 12:28:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:00.976 "params": { 00:34:00.976 "name": "Nvme0", 00:34:00.976 "trtype": "tcp", 00:34:00.976 "traddr": "10.0.0.2", 00:34:00.976 "adrfam": "ipv4", 00:34:00.976 "trsvcid": "4420", 00:34:00.976 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:00.976 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:00.976 "hdgst": false, 00:34:00.976 "ddgst": false 00:34:00.977 }, 00:34:00.977 "method": "bdev_nvme_attach_controller" 00:34:00.977 }' 00:34:00.977 12:28:12 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:00.977 12:28:12 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:00.977 12:28:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:00.977 12:28:12 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:00.977 12:28:12 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:00.977 12:28:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:00.977 12:28:12 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:00.977 12:28:12 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:00.977 12:28:12 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:00.977 12:28:12 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:00.977 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:00.977 ... 00:34:00.977 fio-3.35 00:34:00.977 Starting 3 threads 00:34:00.977 EAL: No free 2048 kB hugepages reported on node 1 00:34:00.977 [2024-06-11 12:28:12.733282] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
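fio_dif_rand_params switches the null bdev to DIF type 3 and drives it with larger random reads: 128 KiB blocks, three jobs, queue depth 3, for 5 seconds. The actual job file is generated on the fly and passed over /dev/fd/61; an illustrative hand-written equivalent (not the literal gen_fio_conf output, and the Nvme0n1 bdev name is an assumption) would be used together with the attach JSON via --spdk_json_conf:

    [global]
    ioengine=spdk_bdev   ; SPDK fio plugin, loaded via LD_PRELOAD as in the trace
    thread=1             ; required by the spdk_bdev ioengine
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    runtime=5
    time_based=1

    [filename0]
    filename=Nvme0n1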
00:34:00.977 [2024-06-11 12:28:12.733330] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:05.174 00:34:05.174 filename0: (groupid=0, jobs=1): err= 0: pid=1726405: Tue Jun 11 12:28:17 2024 00:34:05.174 read: IOPS=210, BW=26.3MiB/s (27.5MB/s)(131MiB/5001msec) 00:34:05.174 slat (nsec): min=5395, max=63651, avg=7233.56, stdev=2576.02 00:34:05.174 clat (usec): min=4409, max=91959, avg=14264.97, stdev=14816.98 00:34:05.174 lat (usec): min=4414, max=91964, avg=14272.20, stdev=14817.04 00:34:05.174 clat percentiles (usec): 00:34:05.174 | 1.00th=[ 5080], 5.00th=[ 6063], 10.00th=[ 6587], 20.00th=[ 7439], 00:34:05.174 | 30.00th=[ 7898], 40.00th=[ 8291], 50.00th=[ 8979], 60.00th=[10028], 00:34:05.174 | 70.00th=[10814], 80.00th=[11469], 90.00th=[47449], 95.00th=[49546], 00:34:05.174 | 99.00th=[53216], 99.50th=[88605], 99.90th=[91751], 99.95th=[91751], 00:34:05.174 | 99.99th=[91751] 00:34:05.174 bw ( KiB/s): min=15872, max=42496, per=35.51%, avg=26880.00, stdev=8336.72, samples=9 00:34:05.174 iops : min= 124, max= 332, avg=210.00, stdev=65.13, samples=9 00:34:05.174 lat (msec) : 10=58.80%, 20=28.64%, 50=8.37%, 100=4.19% 00:34:05.174 cpu : usr=95.94%, sys=3.78%, ctx=10, majf=0, minf=139 00:34:05.174 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:05.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:05.174 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:05.174 issued rwts: total=1051,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:05.174 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:05.174 filename0: (groupid=0, jobs=1): err= 0: pid=1726406: Tue Jun 11 12:28:17 2024 00:34:05.174 read: IOPS=189, BW=23.7MiB/s (24.8MB/s)(119MiB/5030msec) 00:34:05.174 slat (nsec): min=5379, max=34925, avg=7087.88, stdev=1695.61 00:34:05.174 clat (usec): min=4828, max=90232, avg=15822.35, stdev=15640.22 00:34:05.174 lat (usec): min=4834, max=90239, avg=15829.44, stdev=15640.42 00:34:05.174 clat percentiles (usec): 00:34:05.174 | 1.00th=[ 5342], 5.00th=[ 5932], 10.00th=[ 6783], 20.00th=[ 7832], 00:34:05.174 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[10945], 00:34:05.174 | 70.00th=[11600], 80.00th=[12387], 90.00th=[48497], 95.00th=[50594], 00:34:05.174 | 99.00th=[53740], 99.50th=[88605], 99.90th=[90702], 99.95th=[90702], 00:34:05.174 | 99.99th=[90702] 00:34:05.174 bw ( KiB/s): min=12032, max=34304, per=32.12%, avg=24320.00, stdev=6664.75, samples=10 00:34:05.174 iops : min= 94, max= 268, avg=190.00, stdev=52.07, samples=10 00:34:05.174 lat (msec) : 10=50.26%, 20=34.42%, 50=9.44%, 100=5.88% 00:34:05.174 cpu : usr=96.24%, sys=3.44%, ctx=14, majf=0, minf=114 00:34:05.174 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:05.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:05.174 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:05.174 issued rwts: total=953,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:05.174 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:05.174 filename0: (groupid=0, jobs=1): err= 0: pid=1726407: Tue Jun 11 12:28:17 2024 00:34:05.174 read: IOPS=193, BW=24.1MiB/s (25.3MB/s)(121MiB/5028msec) 00:34:05.174 slat (nsec): min=5360, max=30054, avg=7185.82, stdev=1789.46 00:34:05.174 clat (usec): min=4750, max=92708, avg=15522.12, stdev=15073.56 00:34:05.174 lat (usec): min=4758, max=92714, avg=15529.30, stdev=15073.41 00:34:05.174 clat percentiles 
(usec): 00:34:05.174 | 1.00th=[ 5407], 5.00th=[ 6063], 10.00th=[ 6456], 20.00th=[ 7439], 00:34:05.174 | 30.00th=[ 8356], 40.00th=[ 8979], 50.00th=[ 9896], 60.00th=[10945], 00:34:05.174 | 70.00th=[11994], 80.00th=[13042], 90.00th=[47973], 95.00th=[51119], 00:34:05.174 | 99.00th=[53740], 99.50th=[54264], 99.90th=[92799], 99.95th=[92799], 00:34:05.174 | 99.99th=[92799] 00:34:05.174 bw ( KiB/s): min=18432, max=33024, per=32.74%, avg=24786.30, stdev=5182.82, samples=10 00:34:05.174 iops : min= 144, max= 258, avg=193.60, stdev=40.46, samples=10 00:34:05.174 lat (msec) : 10=51.70%, 20=33.57%, 50=8.14%, 100=6.59% 00:34:05.174 cpu : usr=96.20%, sys=3.48%, ctx=13, majf=0, minf=125 00:34:05.174 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:05.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:05.174 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:05.174 issued rwts: total=971,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:05.174 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:05.174 00:34:05.174 Run status group 0 (all jobs): 00:34:05.174 READ: bw=73.9MiB/s (77.5MB/s), 23.7MiB/s-26.3MiB/s (24.8MB/s-27.5MB/s), io=372MiB (390MB), run=5001-5030msec 00:34:05.174 12:28:18 -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:05.174 12:28:18 -- target/dif.sh@43 -- # local sub 00:34:05.174 12:28:18 -- target/dif.sh@45 -- # for sub in "$@" 00:34:05.174 12:28:18 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:05.174 12:28:18 -- target/dif.sh@36 -- # local sub_id=0 00:34:05.174 12:28:18 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:05.174 12:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:05.174 12:28:18 -- common/autotest_common.sh@10 -- # set +x 00:34:05.174 12:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:05.174 12:28:18 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:05.174 12:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:05.174 12:28:18 -- common/autotest_common.sh@10 -- # set +x 00:34:05.174 12:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:05.174 12:28:18 -- target/dif.sh@109 -- # NULL_DIF=2 00:34:05.174 12:28:18 -- target/dif.sh@109 -- # bs=4k 00:34:05.174 12:28:18 -- target/dif.sh@109 -- # numjobs=8 00:34:05.174 12:28:18 -- target/dif.sh@109 -- # iodepth=16 00:34:05.174 12:28:18 -- target/dif.sh@109 -- # runtime= 00:34:05.174 12:28:18 -- target/dif.sh@109 -- # files=2 00:34:05.174 12:28:18 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:05.174 12:28:18 -- target/dif.sh@28 -- # local sub 00:34:05.174 12:28:18 -- target/dif.sh@30 -- # for sub in "$@" 00:34:05.174 12:28:18 -- target/dif.sh@31 -- # create_subsystem 0 00:34:05.174 12:28:18 -- target/dif.sh@18 -- # local sub_id=0 00:34:05.174 12:28:18 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:05.174 12:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:05.174 12:28:18 -- common/autotest_common.sh@10 -- # set +x 00:34:05.174 bdev_null0 00:34:05.174 12:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:05.174 12:28:18 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:05.174 12:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:05.174 12:28:18 -- common/autotest_common.sh@10 -- # set +x 00:34:05.174 12:28:18 -- common/autotest_common.sh@579 
-- # [[ 0 == 0 ]] 00:34:05.175 12:28:18 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:05.175 12:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:05.175 12:28:18 -- common/autotest_common.sh@10 -- # set +x 00:34:05.175 12:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:05.175 12:28:18 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:05.175 12:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:05.175 12:28:18 -- common/autotest_common.sh@10 -- # set +x 00:34:05.175 [2024-06-11 12:28:18.075250] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:05.175 12:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:05.175 12:28:18 -- target/dif.sh@30 -- # for sub in "$@" 00:34:05.175 12:28:18 -- target/dif.sh@31 -- # create_subsystem 1 00:34:05.175 12:28:18 -- target/dif.sh@18 -- # local sub_id=1 00:34:05.175 12:28:18 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:05.175 12:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:05.175 12:28:18 -- common/autotest_common.sh@10 -- # set +x 00:34:05.175 bdev_null1 00:34:05.175 12:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:05.175 12:28:18 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:05.175 12:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:05.175 12:28:18 -- common/autotest_common.sh@10 -- # set +x 00:34:05.175 12:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:05.175 12:28:18 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:05.175 12:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:05.175 12:28:18 -- common/autotest_common.sh@10 -- # set +x 00:34:05.175 12:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:05.175 12:28:18 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:05.175 12:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:05.175 12:28:18 -- common/autotest_common.sh@10 -- # set +x 00:34:05.175 12:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:05.175 12:28:18 -- target/dif.sh@30 -- # for sub in "$@" 00:34:05.175 12:28:18 -- target/dif.sh@31 -- # create_subsystem 2 00:34:05.175 12:28:18 -- target/dif.sh@18 -- # local sub_id=2 00:34:05.175 12:28:18 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:05.175 12:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:05.175 12:28:18 -- common/autotest_common.sh@10 -- # set +x 00:34:05.175 bdev_null2 00:34:05.175 12:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:05.175 12:28:18 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:05.175 12:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:05.175 12:28:18 -- common/autotest_common.sh@10 -- # set +x 00:34:05.175 12:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:05.175 12:28:18 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:05.175 12:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:05.175 12:28:18 -- 
common/autotest_common.sh@10 -- # set +x 00:34:05.175 12:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:05.175 12:28:18 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:05.175 12:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:05.175 12:28:18 -- common/autotest_common.sh@10 -- # set +x 00:34:05.175 12:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:05.175 12:28:18 -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:05.175 12:28:18 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:05.175 12:28:18 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:05.175 12:28:18 -- nvmf/common.sh@520 -- # config=() 00:34:05.175 12:28:18 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:05.175 12:28:18 -- nvmf/common.sh@520 -- # local subsystem config 00:34:05.175 12:28:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:05.175 12:28:18 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:05.175 12:28:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:05.175 { 00:34:05.175 "params": { 00:34:05.175 "name": "Nvme$subsystem", 00:34:05.175 "trtype": "$TEST_TRANSPORT", 00:34:05.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:05.175 "adrfam": "ipv4", 00:34:05.175 "trsvcid": "$NVMF_PORT", 00:34:05.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:05.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:05.175 "hdgst": ${hdgst:-false}, 00:34:05.175 "ddgst": ${ddgst:-false} 00:34:05.175 }, 00:34:05.175 "method": "bdev_nvme_attach_controller" 00:34:05.175 } 00:34:05.175 EOF 00:34:05.175 )") 00:34:05.175 12:28:18 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:05.175 12:28:18 -- target/dif.sh@82 -- # gen_fio_conf 00:34:05.175 12:28:18 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:05.175 12:28:18 -- target/dif.sh@54 -- # local file 00:34:05.175 12:28:18 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:05.175 12:28:18 -- target/dif.sh@56 -- # cat 00:34:05.175 12:28:18 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:05.175 12:28:18 -- common/autotest_common.sh@1320 -- # shift 00:34:05.175 12:28:18 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:05.175 12:28:18 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:05.175 12:28:18 -- nvmf/common.sh@542 -- # cat 00:34:05.175 12:28:18 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:05.175 12:28:18 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:05.175 12:28:18 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:05.175 12:28:18 -- target/dif.sh@72 -- # (( file <= files )) 00:34:05.175 12:28:18 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:05.175 12:28:18 -- target/dif.sh@73 -- # cat 00:34:05.175 12:28:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:05.175 12:28:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:05.175 { 00:34:05.175 "params": { 00:34:05.175 "name": "Nvme$subsystem", 00:34:05.175 "trtype": "$TEST_TRANSPORT", 00:34:05.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:05.175 "adrfam": "ipv4", 00:34:05.175 "trsvcid": 
"$NVMF_PORT", 00:34:05.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:05.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:05.175 "hdgst": ${hdgst:-false}, 00:34:05.175 "ddgst": ${ddgst:-false} 00:34:05.175 }, 00:34:05.175 "method": "bdev_nvme_attach_controller" 00:34:05.175 } 00:34:05.175 EOF 00:34:05.175 )") 00:34:05.175 12:28:18 -- target/dif.sh@72 -- # (( file++ )) 00:34:05.175 12:28:18 -- target/dif.sh@72 -- # (( file <= files )) 00:34:05.175 12:28:18 -- nvmf/common.sh@542 -- # cat 00:34:05.175 12:28:18 -- target/dif.sh@73 -- # cat 00:34:05.175 12:28:18 -- target/dif.sh@72 -- # (( file++ )) 00:34:05.175 12:28:18 -- target/dif.sh@72 -- # (( file <= files )) 00:34:05.175 12:28:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:05.175 12:28:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:05.175 { 00:34:05.175 "params": { 00:34:05.175 "name": "Nvme$subsystem", 00:34:05.175 "trtype": "$TEST_TRANSPORT", 00:34:05.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:05.175 "adrfam": "ipv4", 00:34:05.175 "trsvcid": "$NVMF_PORT", 00:34:05.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:05.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:05.175 "hdgst": ${hdgst:-false}, 00:34:05.175 "ddgst": ${ddgst:-false} 00:34:05.175 }, 00:34:05.175 "method": "bdev_nvme_attach_controller" 00:34:05.175 } 00:34:05.175 EOF 00:34:05.175 )") 00:34:05.175 12:28:18 -- nvmf/common.sh@542 -- # cat 00:34:05.175 12:28:18 -- nvmf/common.sh@544 -- # jq . 00:34:05.175 12:28:18 -- nvmf/common.sh@545 -- # IFS=, 00:34:05.175 12:28:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:05.175 "params": { 00:34:05.175 "name": "Nvme0", 00:34:05.175 "trtype": "tcp", 00:34:05.175 "traddr": "10.0.0.2", 00:34:05.175 "adrfam": "ipv4", 00:34:05.175 "trsvcid": "4420", 00:34:05.175 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:05.175 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:05.175 "hdgst": false, 00:34:05.175 "ddgst": false 00:34:05.175 }, 00:34:05.175 "method": "bdev_nvme_attach_controller" 00:34:05.175 },{ 00:34:05.175 "params": { 00:34:05.175 "name": "Nvme1", 00:34:05.175 "trtype": "tcp", 00:34:05.175 "traddr": "10.0.0.2", 00:34:05.175 "adrfam": "ipv4", 00:34:05.175 "trsvcid": "4420", 00:34:05.175 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:05.175 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:05.175 "hdgst": false, 00:34:05.175 "ddgst": false 00:34:05.175 }, 00:34:05.175 "method": "bdev_nvme_attach_controller" 00:34:05.175 },{ 00:34:05.175 "params": { 00:34:05.175 "name": "Nvme2", 00:34:05.175 "trtype": "tcp", 00:34:05.175 "traddr": "10.0.0.2", 00:34:05.175 "adrfam": "ipv4", 00:34:05.175 "trsvcid": "4420", 00:34:05.175 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:05.175 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:05.175 "hdgst": false, 00:34:05.175 "ddgst": false 00:34:05.175 }, 00:34:05.175 "method": "bdev_nvme_attach_controller" 00:34:05.175 }' 00:34:05.466 12:28:18 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:05.466 12:28:18 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:05.466 12:28:18 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:05.466 12:28:18 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:05.466 12:28:18 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:05.466 12:28:18 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:05.466 12:28:18 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:05.466 12:28:18 -- 
common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:05.466 12:28:18 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:05.466 12:28:18 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:05.736 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:05.736 ... 00:34:05.736 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:05.736 ... 00:34:05.736 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:05.736 ... 00:34:05.736 fio-3.35 00:34:05.736 Starting 24 threads 00:34:05.736 EAL: No free 2048 kB hugepages reported on node 1 00:34:06.306 [2024-06-11 12:28:19.324335] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:34:06.306 [2024-06-11 12:28:19.324385] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:18.524 00:34:18.524 filename0: (groupid=0, jobs=1): err= 0: pid=1727927: Tue Jun 11 12:28:29 2024 00:34:18.524 read: IOPS=527, BW=2109KiB/s (2159kB/s)(20.6MiB/10015msec) 00:34:18.524 slat (nsec): min=5427, max=81226, avg=8823.57, stdev=4222.67 00:34:18.524 clat (usec): min=3733, max=32719, avg=30270.04, stdev=2800.54 00:34:18.524 lat (usec): min=3751, max=32729, avg=30278.86, stdev=2799.47 00:34:18.524 clat percentiles (usec): 00:34:18.524 | 1.00th=[ 9372], 5.00th=[30278], 10.00th=[30278], 20.00th=[30540], 00:34:18.524 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:18.524 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31327], 00:34:18.524 | 99.00th=[32375], 99.50th=[32375], 99.90th=[32637], 99.95th=[32637], 00:34:18.524 | 99.99th=[32637] 00:34:18.524 bw ( KiB/s): min= 2043, max= 2432, per=4.19%, avg=2105.10, stdev=97.15, samples=20 00:34:18.524 iops : min= 510, max= 608, avg=526.20, stdev=24.29, samples=20 00:34:18.524 lat (msec) : 4=0.17%, 10=1.04%, 20=0.61%, 50=98.18% 00:34:18.524 cpu : usr=99.19%, sys=0.53%, ctx=11, majf=0, minf=73 00:34:18.524 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:18.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.524 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.524 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.524 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:18.524 filename0: (groupid=0, jobs=1): err= 0: pid=1727928: Tue Jun 11 12:28:29 2024 00:34:18.524 read: IOPS=525, BW=2103KiB/s (2153kB/s)(20.6MiB/10013msec) 00:34:18.524 slat (usec): min=5, max=116, avg=24.01, stdev=19.03 00:34:18.524 clat (usec): min=5395, max=35313, avg=30251.44, stdev=2368.29 00:34:18.524 lat (usec): min=5403, max=35346, avg=30275.45, stdev=2368.09 00:34:18.524 clat percentiles (usec): 00:34:18.524 | 1.00th=[19268], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:34:18.524 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:18.524 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31327], 00:34:18.524 | 99.00th=[32113], 99.50th=[32375], 99.90th=[32637], 99.95th=[32900], 00:34:18.524 | 99.99th=[35390] 00:34:18.524 bw ( KiB/s): min= 2043, max= 2432, per=4.18%, avg=2098.95, stdev=96.65, samples=20 
00:34:18.524 iops : min= 510, max= 608, avg=524.70, stdev=24.19, samples=20 00:34:18.524 lat (msec) : 10=0.91%, 20=0.57%, 50=98.52% 00:34:18.524 cpu : usr=98.48%, sys=0.91%, ctx=157, majf=0, minf=54 00:34:18.524 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:18.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.524 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.524 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.524 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:18.524 filename0: (groupid=0, jobs=1): err= 0: pid=1727929: Tue Jun 11 12:28:29 2024 00:34:18.524 read: IOPS=520, BW=2083KiB/s (2133kB/s)(20.4MiB/10014msec) 00:34:18.524 slat (usec): min=5, max=109, avg=35.96, stdev=17.70 00:34:18.524 clat (usec): min=25504, max=38393, avg=30376.94, stdev=640.33 00:34:18.524 lat (usec): min=25510, max=38408, avg=30412.91, stdev=640.82 00:34:18.524 clat percentiles (usec): 00:34:18.524 | 1.00th=[29492], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:34:18.524 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:18.524 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:18.524 | 99.00th=[32113], 99.50th=[32375], 99.90th=[38536], 99.95th=[38536], 00:34:18.524 | 99.99th=[38536] 00:34:18.524 bw ( KiB/s): min= 1923, max= 2176, per=4.14%, avg=2079.90, stdev=69.71, samples=20 00:34:18.524 iops : min= 480, max= 544, avg=519.90, stdev=17.47, samples=20 00:34:18.524 lat (msec) : 50=100.00% 00:34:18.524 cpu : usr=99.26%, sys=0.47%, ctx=15, majf=0, minf=38 00:34:18.524 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:18.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.524 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.524 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.524 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:18.524 filename0: (groupid=0, jobs=1): err= 0: pid=1727930: Tue Jun 11 12:28:29 2024 00:34:18.524 read: IOPS=521, BW=2085KiB/s (2135kB/s)(20.4MiB/10005msec) 00:34:18.524 slat (usec): min=6, max=118, avg=36.65, stdev=17.09 00:34:18.524 clat (usec): min=25242, max=32683, avg=30360.57, stdev=469.72 00:34:18.524 lat (usec): min=25304, max=32729, avg=30397.22, stdev=470.62 00:34:18.524 clat percentiles (usec): 00:34:18.524 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:34:18.524 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:34:18.524 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:18.524 | 99.00th=[32113], 99.50th=[32375], 99.90th=[32637], 99.95th=[32637], 00:34:18.524 | 99.99th=[32637] 00:34:18.524 bw ( KiB/s): min= 2043, max= 2176, per=4.15%, avg=2081.42, stdev=58.08, samples=19 00:34:18.524 iops : min= 510, max= 544, avg=520.32, stdev=14.55, samples=19 00:34:18.524 lat (msec) : 50=100.00% 00:34:18.524 cpu : usr=99.24%, sys=0.49%, ctx=22, majf=0, minf=50 00:34:18.524 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:18.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.524 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.524 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:18.525 filename0: (groupid=0, 
jobs=1): err= 0: pid=1727931: Tue Jun 11 12:28:29 2024 00:34:18.525 read: IOPS=519, BW=2079KiB/s (2129kB/s)(20.3MiB/10017msec) 00:34:18.525 slat (usec): min=5, max=115, avg=27.89, stdev=19.80 00:34:18.525 clat (usec): min=13556, max=55936, avg=30551.40, stdev=3669.36 00:34:18.525 lat (usec): min=13562, max=55941, avg=30579.28, stdev=3670.64 00:34:18.525 clat percentiles (usec): 00:34:18.525 | 1.00th=[19006], 5.00th=[26084], 10.00th=[28705], 20.00th=[30016], 00:34:18.525 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:34:18.525 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31851], 95.00th=[34866], 00:34:18.525 | 99.00th=[47973], 99.50th=[50070], 99.90th=[55837], 99.95th=[55837], 00:34:18.525 | 99.99th=[55837] 00:34:18.525 bw ( KiB/s): min= 1920, max= 2240, per=4.14%, avg=2077.20, stdev=77.75, samples=20 00:34:18.525 iops : min= 480, max= 560, avg=519.30, stdev=19.44, samples=20 00:34:18.525 lat (msec) : 20=1.81%, 50=97.58%, 100=0.61% 00:34:18.525 cpu : usr=99.13%, sys=0.48%, ctx=140, majf=0, minf=58 00:34:18.525 IO depths : 1=3.4%, 2=7.2%, 4=17.1%, 8=62.1%, 16=10.2%, 32=0.0%, >=64=0.0% 00:34:18.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.525 complete : 0=0.0%, 4=92.3%, 8=2.9%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.525 issued rwts: total=5207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:18.525 filename0: (groupid=0, jobs=1): err= 0: pid=1727932: Tue Jun 11 12:28:29 2024 00:34:18.525 read: IOPS=520, BW=2084KiB/s (2134kB/s)(20.4MiB/10013msec) 00:34:18.525 slat (nsec): min=5586, max=48710, avg=12216.46, stdev=7520.56 00:34:18.525 clat (usec): min=21923, max=40283, avg=30608.66, stdev=883.66 00:34:18.525 lat (usec): min=21932, max=40299, avg=30620.88, stdev=883.70 00:34:18.525 clat percentiles (usec): 00:34:18.525 | 1.00th=[29230], 5.00th=[30016], 10.00th=[30278], 20.00th=[30278], 00:34:18.525 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:18.525 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31327], 00:34:18.525 | 99.00th=[32113], 99.50th=[33424], 99.90th=[40109], 99.95th=[40109], 00:34:18.525 | 99.99th=[40109] 00:34:18.525 bw ( KiB/s): min= 1923, max= 2176, per=4.14%, avg=2079.70, stdev=70.30, samples=20 00:34:18.525 iops : min= 480, max= 544, avg=519.85, stdev=17.69, samples=20 00:34:18.525 lat (msec) : 50=100.00% 00:34:18.525 cpu : usr=97.89%, sys=1.14%, ctx=272, majf=0, minf=36 00:34:18.525 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:18.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.525 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.525 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:18.525 filename0: (groupid=0, jobs=1): err= 0: pid=1727933: Tue Jun 11 12:28:29 2024 00:34:18.525 read: IOPS=525, BW=2101KiB/s (2151kB/s)(20.5MiB/10005msec) 00:34:18.525 slat (usec): min=5, max=110, avg=31.60, stdev=19.29 00:34:18.525 clat (usec): min=10238, max=54711, avg=30182.19, stdev=3725.84 00:34:18.525 lat (usec): min=10246, max=54748, avg=30213.79, stdev=3727.97 00:34:18.525 clat percentiles (usec): 00:34:18.525 | 1.00th=[16057], 5.00th=[25822], 10.00th=[29754], 20.00th=[30016], 00:34:18.525 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:18.525 | 70.00th=[30540], 80.00th=[30540], 90.00th=[31065], 
95.00th=[32113], 00:34:18.525 | 99.00th=[46400], 99.50th=[50070], 99.90th=[54789], 99.95th=[54789], 00:34:18.525 | 99.99th=[54789] 00:34:18.525 bw ( KiB/s): min= 1920, max= 2224, per=4.18%, avg=2097.68, stdev=84.31, samples=19 00:34:18.525 iops : min= 480, max= 556, avg=524.42, stdev=21.08, samples=19 00:34:18.525 lat (msec) : 20=2.97%, 50=96.54%, 100=0.49% 00:34:18.525 cpu : usr=99.22%, sys=0.48%, ctx=62, majf=0, minf=55 00:34:18.525 IO depths : 1=4.2%, 2=9.7%, 4=22.4%, 8=55.3%, 16=8.4%, 32=0.0%, >=64=0.0% 00:34:18.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.525 complete : 0=0.0%, 4=93.5%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.525 issued rwts: total=5254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:18.525 filename0: (groupid=0, jobs=1): err= 0: pid=1727934: Tue Jun 11 12:28:29 2024 00:34:18.525 read: IOPS=530, BW=2121KiB/s (2172kB/s)(20.7MiB/10009msec) 00:34:18.525 slat (usec): min=5, max=106, avg=27.74, stdev=16.56 00:34:18.525 clat (usec): min=7758, max=53943, avg=29952.67, stdev=2920.72 00:34:18.525 lat (usec): min=7766, max=53957, avg=29980.41, stdev=2923.64 00:34:18.525 clat percentiles (usec): 00:34:18.525 | 1.00th=[17957], 5.00th=[24511], 10.00th=[30016], 20.00th=[30278], 00:34:18.525 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:34:18.525 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:18.525 | 99.00th=[35914], 99.50th=[41157], 99.90th=[46400], 99.95th=[46400], 00:34:18.525 | 99.99th=[53740] 00:34:18.525 bw ( KiB/s): min= 2048, max= 2544, per=4.22%, avg=2120.42, stdev=128.04, samples=19 00:34:18.525 iops : min= 512, max= 636, avg=530.11, stdev=32.01, samples=19 00:34:18.525 lat (msec) : 10=0.19%, 20=2.05%, 50=97.74%, 100=0.02% 00:34:18.525 cpu : usr=98.66%, sys=0.84%, ctx=168, majf=0, minf=62 00:34:18.525 IO depths : 1=5.4%, 2=10.9%, 4=22.6%, 8=53.9%, 16=7.3%, 32=0.0%, >=64=0.0% 00:34:18.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.525 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.525 issued rwts: total=5308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:18.525 filename1: (groupid=0, jobs=1): err= 0: pid=1727935: Tue Jun 11 12:28:29 2024 00:34:18.525 read: IOPS=521, BW=2084KiB/s (2134kB/s)(20.4MiB/10011msec) 00:34:18.525 slat (usec): min=5, max=108, avg=32.38, stdev=16.07 00:34:18.525 clat (usec): min=25548, max=35333, avg=30437.79, stdev=528.55 00:34:18.525 lat (usec): min=25578, max=35363, avg=30470.17, stdev=526.97 00:34:18.525 clat percentiles (usec): 00:34:18.525 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:34:18.525 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:34:18.525 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:18.525 | 99.00th=[32113], 99.50th=[32637], 99.90th=[35390], 99.95th=[35390], 00:34:18.525 | 99.99th=[35390] 00:34:18.525 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2081.42, stdev=72.07, samples=19 00:34:18.525 iops : min= 480, max= 544, avg=520.32, stdev=18.04, samples=19 00:34:18.525 lat (msec) : 50=100.00% 00:34:18.525 cpu : usr=99.17%, sys=0.55%, ctx=14, majf=0, minf=55 00:34:18.525 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:18.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.525 
complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.525 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:18.525 filename1: (groupid=0, jobs=1): err= 0: pid=1727936: Tue Jun 11 12:28:29 2024 00:34:18.525 read: IOPS=522, BW=2088KiB/s (2139kB/s)(20.4MiB/10021msec) 00:34:18.525 slat (usec): min=5, max=112, avg=19.80, stdev=16.10 00:34:18.525 clat (usec): min=14204, max=38024, avg=30497.81, stdev=1137.84 00:34:18.525 lat (usec): min=14216, max=38052, avg=30517.61, stdev=1138.24 00:34:18.525 clat percentiles (usec): 00:34:18.525 | 1.00th=[25822], 5.00th=[30016], 10.00th=[30278], 20.00th=[30278], 00:34:18.525 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:18.525 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31327], 00:34:18.525 | 99.00th=[34341], 99.50th=[35390], 99.90th=[35914], 99.95th=[36963], 00:34:18.525 | 99.99th=[38011] 00:34:18.525 bw ( KiB/s): min= 2043, max= 2176, per=4.16%, avg=2085.90, stdev=59.98, samples=20 00:34:18.525 iops : min= 510, max= 544, avg=521.40, stdev=14.97, samples=20 00:34:18.525 lat (msec) : 20=0.31%, 50=99.69% 00:34:18.525 cpu : usr=98.89%, sys=0.69%, ctx=101, majf=0, minf=55 00:34:18.525 IO depths : 1=5.3%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:34:18.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.525 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.525 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:18.525 filename1: (groupid=0, jobs=1): err= 0: pid=1727937: Tue Jun 11 12:28:29 2024 00:34:18.525 read: IOPS=522, BW=2091KiB/s (2141kB/s)(20.4MiB/10014msec) 00:34:18.525 slat (nsec): min=5519, max=46019, avg=8827.01, stdev=4746.83 00:34:18.525 clat (usec): min=18798, max=44332, avg=30546.12, stdev=1870.16 00:34:18.525 lat (usec): min=18806, max=44340, avg=30554.95, stdev=1870.35 00:34:18.525 clat percentiles (usec): 00:34:18.525 | 1.00th=[22152], 5.00th=[28181], 10.00th=[30016], 20.00th=[30278], 00:34:18.525 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:18.525 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:34:18.525 | 99.00th=[37487], 99.50th=[39584], 99.90th=[41157], 99.95th=[42730], 00:34:18.525 | 99.99th=[44303] 00:34:18.525 bw ( KiB/s): min= 1920, max= 2224, per=4.16%, avg=2086.75, stdev=73.23, samples=20 00:34:18.525 iops : min= 480, max= 556, avg=521.65, stdev=18.33, samples=20 00:34:18.525 lat (msec) : 20=0.59%, 50=99.41% 00:34:18.525 cpu : usr=99.04%, sys=0.53%, ctx=48, majf=0, minf=56 00:34:18.525 IO depths : 1=1.8%, 2=7.6%, 4=23.7%, 8=56.1%, 16=10.7%, 32=0.0%, >=64=0.0% 00:34:18.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.525 complete : 0=0.0%, 4=94.0%, 8=0.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.525 issued rwts: total=5234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:18.525 filename1: (groupid=0, jobs=1): err= 0: pid=1727938: Tue Jun 11 12:28:29 2024 00:34:18.525 read: IOPS=520, BW=2081KiB/s (2131kB/s)(20.3MiB/10006msec) 00:34:18.525 slat (nsec): min=5545, max=99045, avg=28131.04, stdev=15433.97 00:34:18.525 clat (usec): min=8371, max=54190, avg=30526.06, stdev=2295.06 00:34:18.525 lat (usec): min=8376, max=54210, avg=30554.20, stdev=2294.89 
00:34:18.525 clat percentiles (usec): 00:34:18.525 | 1.00th=[25822], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:34:18.525 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:34:18.525 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31589], 00:34:18.525 | 99.00th=[35390], 99.50th=[47449], 99.90th=[54264], 99.95th=[54264], 00:34:18.525 | 99.99th=[54264] 00:34:18.525 bw ( KiB/s): min= 1891, max= 2160, per=4.14%, avg=2076.79, stdev=69.82, samples=19 00:34:18.525 iops : min= 472, max= 540, avg=519.16, stdev=17.57, samples=19 00:34:18.526 lat (msec) : 10=0.04%, 20=0.60%, 50=98.87%, 100=0.50% 00:34:18.526 cpu : usr=99.16%, sys=0.56%, ctx=22, majf=0, minf=98 00:34:18.526 IO depths : 1=0.3%, 2=6.3%, 4=24.5%, 8=56.6%, 16=12.3%, 32=0.0%, >=64=0.0% 00:34:18.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.526 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.526 issued rwts: total=5206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.526 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:18.526 filename1: (groupid=0, jobs=1): err= 0: pid=1727939: Tue Jun 11 12:28:29 2024 00:34:18.526 read: IOPS=521, BW=2086KiB/s (2136kB/s)(20.4MiB/10015msec) 00:34:18.526 slat (usec): min=5, max=122, avg=38.42, stdev=21.66 00:34:18.526 clat (usec): min=12505, max=52678, avg=30317.11, stdev=1364.58 00:34:18.526 lat (usec): min=12514, max=52697, avg=30355.53, stdev=1366.42 00:34:18.526 clat percentiles (usec): 00:34:18.526 | 1.00th=[25822], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:34:18.526 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:18.526 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:18.526 | 99.00th=[33817], 99.50th=[38536], 99.90th=[44303], 99.95th=[48497], 00:34:18.526 | 99.99th=[52691] 00:34:18.526 bw ( KiB/s): min= 1968, max= 2176, per=4.15%, avg=2082.15, stdev=64.94, samples=20 00:34:18.526 iops : min= 492, max= 544, avg=520.50, stdev=16.18, samples=20 00:34:18.526 lat (msec) : 20=0.15%, 50=99.81%, 100=0.04% 00:34:18.526 cpu : usr=99.12%, sys=0.54%, ctx=60, majf=0, minf=62 00:34:18.526 IO depths : 1=5.5%, 2=11.5%, 4=24.0%, 8=51.8%, 16=7.1%, 32=0.0%, >=64=0.0% 00:34:18.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.526 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.526 issued rwts: total=5222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.526 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:18.526 filename1: (groupid=0, jobs=1): err= 0: pid=1727940: Tue Jun 11 12:28:29 2024 00:34:18.526 read: IOPS=526, BW=2108KiB/s (2158kB/s)(20.6MiB/10020msec) 00:34:18.526 slat (usec): min=2, max=116, avg=10.49, stdev= 8.72 00:34:18.526 clat (usec): min=2886, max=32702, avg=30276.97, stdev=2681.76 00:34:18.526 lat (usec): min=2891, max=32710, avg=30287.46, stdev=2681.94 00:34:18.526 clat percentiles (usec): 00:34:18.526 | 1.00th=[10028], 5.00th=[30016], 10.00th=[30278], 20.00th=[30278], 00:34:18.526 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:18.526 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31327], 00:34:18.526 | 99.00th=[32375], 99.50th=[32637], 99.90th=[32637], 99.95th=[32637], 00:34:18.526 | 99.99th=[32637] 00:34:18.526 bw ( KiB/s): min= 2048, max= 2432, per=4.19%, avg=2105.35, stdev=96.99, samples=20 00:34:18.526 iops : min= 512, max= 608, avg=526.30, stdev=24.22, samples=20 00:34:18.526 lat 
(msec) : 4=0.04%, 10=0.83%, 20=0.74%, 50=98.39% 00:34:18.526 cpu : usr=99.27%, sys=0.45%, ctx=11, majf=0, minf=61 00:34:18.526 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:18.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.526 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.526 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.526 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:18.526 filename1: (groupid=0, jobs=1): err= 0: pid=1727941: Tue Jun 11 12:28:29 2024 00:34:18.526 read: IOPS=521, BW=2086KiB/s (2136kB/s)(20.4MiB/10001msec) 00:34:18.526 slat (nsec): min=5516, max=67568, avg=12091.38, stdev=8538.76 00:34:18.526 clat (usec): min=8609, max=55547, avg=30567.86, stdev=2601.62 00:34:18.526 lat (usec): min=8616, max=55556, avg=30579.95, stdev=2601.75 00:34:18.526 clat percentiles (usec): 00:34:18.526 | 1.00th=[22938], 5.00th=[30016], 10.00th=[30278], 20.00th=[30278], 00:34:18.526 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:18.526 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31327], 95.00th=[31327], 00:34:18.526 | 99.00th=[32375], 99.50th=[47973], 99.90th=[54789], 99.95th=[55313], 00:34:18.526 | 99.99th=[55313] 00:34:18.526 bw ( KiB/s): min= 1920, max= 2192, per=4.15%, avg=2081.68, stdev=73.69, samples=19 00:34:18.526 iops : min= 480, max= 548, avg=520.42, stdev=18.42, samples=19 00:34:18.526 lat (msec) : 10=0.35%, 20=0.46%, 50=98.81%, 100=0.38% 00:34:18.526 cpu : usr=99.14%, sys=0.52%, ctx=56, majf=0, minf=67 00:34:18.526 IO depths : 1=5.3%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:34:18.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.526 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.526 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.526 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:18.526 filename1: (groupid=0, jobs=1): err= 0: pid=1727942: Tue Jun 11 12:28:29 2024 00:34:18.526 read: IOPS=521, BW=2085KiB/s (2135kB/s)(20.4MiB/10005msec) 00:34:18.526 slat (nsec): min=5519, max=45942, avg=11534.50, stdev=7614.77 00:34:18.526 clat (usec): min=13390, max=40595, avg=30577.42, stdev=1207.37 00:34:18.526 lat (usec): min=13397, max=40613, avg=30588.95, stdev=1207.85 00:34:18.526 clat percentiles (usec): 00:34:18.526 | 1.00th=[29230], 5.00th=[30016], 10.00th=[30278], 20.00th=[30278], 00:34:18.526 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:18.526 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31327], 00:34:18.526 | 99.00th=[32113], 99.50th=[33162], 99.90th=[40633], 99.95th=[40633], 00:34:18.526 | 99.99th=[40633] 00:34:18.526 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2081.68, stdev=71.93, samples=19 00:34:18.526 iops : min= 480, max= 544, avg=520.42, stdev=17.98, samples=19 00:34:18.526 lat (msec) : 20=0.31%, 50=99.69% 00:34:18.526 cpu : usr=99.13%, sys=0.55%, ctx=75, majf=0, minf=64 00:34:18.526 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:18.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.526 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.526 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.526 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:18.526 filename2: (groupid=0, jobs=1): err= 0: pid=1727943: 
Tue Jun 11 12:28:29 2024 00:34:18.526 read: IOPS=520, BW=2084KiB/s (2134kB/s)(20.4MiB/10013msec) 00:34:18.526 slat (usec): min=5, max=117, avg=34.30, stdev=19.77 00:34:18.526 clat (usec): min=12852, max=43665, avg=30367.37, stdev=1426.25 00:34:18.526 lat (usec): min=12858, max=43691, avg=30401.67, stdev=1426.66 00:34:18.526 clat percentiles (usec): 00:34:18.526 | 1.00th=[29230], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:34:18.526 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:18.526 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:18.526 | 99.00th=[32113], 99.50th=[41157], 99.90th=[43779], 99.95th=[43779], 00:34:18.526 | 99.99th=[43779] 00:34:18.526 bw ( KiB/s): min= 1920, max= 2176, per=4.13%, avg=2074.95, stdev=68.52, samples=19 00:34:18.526 iops : min= 480, max= 544, avg=518.74, stdev=17.13, samples=19 00:34:18.526 lat (msec) : 20=0.31%, 50=99.69% 00:34:18.526 cpu : usr=99.09%, sys=0.55%, ctx=148, majf=0, minf=52 00:34:18.526 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:18.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.526 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.526 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.526 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:18.526 filename2: (groupid=0, jobs=1): err= 0: pid=1727944: Tue Jun 11 12:28:29 2024 00:34:18.526 read: IOPS=521, BW=2087KiB/s (2137kB/s)(20.4MiB/10011msec) 00:34:18.526 slat (usec): min=5, max=138, avg=38.31, stdev=19.16 00:34:18.526 clat (usec): min=17357, max=45912, avg=30324.65, stdev=1272.08 00:34:18.526 lat (usec): min=17363, max=45925, avg=30362.97, stdev=1273.99 00:34:18.526 clat percentiles (usec): 00:34:18.526 | 1.00th=[25822], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:34:18.526 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:18.526 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:18.526 | 99.00th=[32375], 99.50th=[32637], 99.90th=[45351], 99.95th=[45351], 00:34:18.526 | 99.99th=[45876] 00:34:18.526 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2082.55, stdev=69.72, samples=20 00:34:18.526 iops : min= 480, max= 544, avg=520.60, stdev=17.52, samples=20 00:34:18.526 lat (msec) : 20=0.42%, 50=99.58% 00:34:18.526 cpu : usr=99.23%, sys=0.49%, ctx=27, majf=0, minf=42 00:34:18.526 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:18.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.526 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.526 issued rwts: total=5222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.526 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:18.526 filename2: (groupid=0, jobs=1): err= 0: pid=1727945: Tue Jun 11 12:28:29 2024 00:34:18.526 read: IOPS=522, BW=2088KiB/s (2139kB/s)(20.4MiB/10021msec) 00:34:18.526 slat (usec): min=5, max=118, avg=25.82, stdev=18.83 00:34:18.526 clat (usec): min=19158, max=35415, avg=30452.55, stdev=839.47 00:34:18.526 lat (usec): min=19171, max=35490, avg=30478.37, stdev=838.19 00:34:18.526 clat percentiles (usec): 00:34:18.526 | 1.00th=[26346], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:34:18.526 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:18.526 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:18.526 | 
99.00th=[32375], 99.50th=[32637], 99.90th=[33817], 99.95th=[33817], 00:34:18.526 | 99.99th=[35390] 00:34:18.526 bw ( KiB/s): min= 2043, max= 2176, per=4.16%, avg=2085.90, stdev=59.98, samples=20 00:34:18.526 iops : min= 510, max= 544, avg=521.40, stdev=14.97, samples=20 00:34:18.526 lat (msec) : 20=0.27%, 50=99.73% 00:34:18.526 cpu : usr=99.19%, sys=0.51%, ctx=42, majf=0, minf=71 00:34:18.526 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:18.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.526 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.526 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.526 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:18.526 filename2: (groupid=0, jobs=1): err= 0: pid=1727946: Tue Jun 11 12:28:29 2024 00:34:18.527 read: IOPS=521, BW=2086KiB/s (2136kB/s)(20.4MiB/10002msec) 00:34:18.527 slat (nsec): min=5521, max=67350, avg=11036.64, stdev=7814.89 00:34:18.527 clat (usec): min=8901, max=52297, avg=30585.15, stdev=1695.52 00:34:18.527 lat (usec): min=8910, max=52305, avg=30596.19, stdev=1695.75 00:34:18.527 clat percentiles (usec): 00:34:18.527 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30278], 20.00th=[30278], 00:34:18.527 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:18.527 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31327], 95.00th=[31327], 00:34:18.527 | 99.00th=[32113], 99.50th=[32375], 99.90th=[46924], 99.95th=[49546], 00:34:18.527 | 99.99th=[52167] 00:34:18.527 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2081.84, stdev=71.56, samples=19 00:34:18.527 iops : min= 480, max= 544, avg=520.42, stdev=17.98, samples=19 00:34:18.527 lat (msec) : 10=0.04%, 20=0.35%, 50=99.58%, 100=0.04% 00:34:18.527 cpu : usr=98.96%, sys=0.59%, ctx=91, majf=0, minf=69 00:34:18.527 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:18.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.527 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.527 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.527 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:18.527 filename2: (groupid=0, jobs=1): err= 0: pid=1727947: Tue Jun 11 12:28:29 2024 00:34:18.527 read: IOPS=520, BW=2083KiB/s (2133kB/s)(20.4MiB/10014msec) 00:34:18.527 slat (usec): min=5, max=121, avg=35.62, stdev=17.10 00:34:18.527 clat (usec): min=24929, max=38373, avg=30405.28, stdev=650.12 00:34:18.527 lat (usec): min=24964, max=38392, avg=30440.90, stdev=649.13 00:34:18.527 clat percentiles (usec): 00:34:18.527 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:34:18.527 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:34:18.527 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:18.527 | 99.00th=[32375], 99.50th=[32637], 99.90th=[38536], 99.95th=[38536], 00:34:18.527 | 99.99th=[38536] 00:34:18.527 bw ( KiB/s): min= 1923, max= 2176, per=4.14%, avg=2079.90, stdev=70.19, samples=20 00:34:18.527 iops : min= 480, max= 544, avg=519.90, stdev=17.66, samples=20 00:34:18.527 lat (msec) : 50=100.00% 00:34:18.527 cpu : usr=99.28%, sys=0.43%, ctx=28, majf=0, minf=35 00:34:18.527 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:18.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.527 complete : 0=0.0%, 4=94.1%, 8=0.0%, 
16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.527 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.527 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:18.527 filename2: (groupid=0, jobs=1): err= 0: pid=1727948: Tue Jun 11 12:28:29 2024 00:34:18.527 read: IOPS=521, BW=2087KiB/s (2137kB/s)(20.4MiB/10003msec) 00:34:18.527 slat (usec): min=5, max=110, avg=10.46, stdev= 8.00 00:34:18.527 clat (usec): min=3904, max=53744, avg=30627.28, stdev=2678.76 00:34:18.527 lat (usec): min=3910, max=53782, avg=30637.74, stdev=2679.12 00:34:18.527 clat percentiles (usec): 00:34:18.527 | 1.00th=[21627], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:34:18.527 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:18.527 | 70.00th=[30802], 80.00th=[30802], 90.00th=[31327], 95.00th=[31327], 00:34:18.527 | 99.00th=[40633], 99.50th=[47973], 99.90th=[53740], 99.95th=[53740], 00:34:18.527 | 99.99th=[53740] 00:34:18.527 bw ( KiB/s): min= 1872, max= 2144, per=4.14%, avg=2078.32, stdev=60.08, samples=19 00:34:18.527 iops : min= 468, max= 536, avg=519.58, stdev=15.02, samples=19 00:34:18.527 lat (msec) : 4=0.11%, 10=0.08%, 20=0.80%, 50=98.81%, 100=0.19% 00:34:18.527 cpu : usr=99.02%, sys=0.61%, ctx=118, majf=0, minf=64 00:34:18.527 IO depths : 1=0.1%, 2=0.5%, 4=1.7%, 8=79.5%, 16=18.2%, 32=0.0%, >=64=0.0% 00:34:18.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.527 complete : 0=0.0%, 4=89.8%, 8=9.7%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.527 issued rwts: total=5218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.527 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:18.527 filename2: (groupid=0, jobs=1): err= 0: pid=1727949: Tue Jun 11 12:28:29 2024 00:34:18.527 read: IOPS=535, BW=2143KiB/s (2194kB/s)(20.9MiB/10002msec) 00:34:18.527 slat (usec): min=5, max=103, avg=29.09, stdev=17.60 00:34:18.527 clat (usec): min=2215, max=54612, avg=29602.68, stdev=4144.91 00:34:18.527 lat (usec): min=2221, max=54624, avg=29631.78, stdev=4149.03 00:34:18.527 clat percentiles (usec): 00:34:18.527 | 1.00th=[16581], 5.00th=[19006], 10.00th=[28181], 20.00th=[30016], 00:34:18.527 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:18.527 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:18.527 | 99.00th=[44303], 99.50th=[46924], 99.90th=[54789], 99.95th=[54789], 00:34:18.527 | 99.99th=[54789] 00:34:18.527 bw ( KiB/s): min= 1971, max= 2672, per=4.25%, avg=2134.89, stdev=166.31, samples=19 00:34:18.527 iops : min= 492, max= 668, avg=533.68, stdev=41.62, samples=19 00:34:18.527 lat (msec) : 4=0.30%, 10=0.07%, 20=5.30%, 50=93.99%, 100=0.34% 00:34:18.527 cpu : usr=99.19%, sys=0.52%, ctx=42, majf=0, minf=47 00:34:18.527 IO depths : 1=4.9%, 2=10.1%, 4=21.7%, 8=55.4%, 16=7.8%, 32=0.0%, >=64=0.0% 00:34:18.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.527 complete : 0=0.0%, 4=93.2%, 8=1.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.527 issued rwts: total=5358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.527 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:18.527 filename2: (groupid=0, jobs=1): err= 0: pid=1727950: Tue Jun 11 12:28:29 2024 00:34:18.527 read: IOPS=524, BW=2099KiB/s (2150kB/s)(20.5MiB/10003msec) 00:34:18.527 slat (usec): min=5, max=137, avg=21.56, stdev=25.89 00:34:18.527 clat (usec): min=7775, max=52311, avg=30346.58, stdev=3173.30 00:34:18.527 lat (usec): min=7784, max=52320, avg=30368.14, stdev=3173.99 
00:34:18.527 clat percentiles (usec): 00:34:18.527 | 1.00th=[16712], 5.00th=[27919], 10.00th=[29754], 20.00th=[30278], 00:34:18.527 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:18.527 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31851], 00:34:18.527 | 99.00th=[41681], 99.50th=[44303], 99.90th=[52167], 99.95th=[52167], 00:34:18.527 | 99.99th=[52167] 00:34:18.527 bw ( KiB/s): min= 1840, max= 2176, per=4.18%, avg=2096.00, stdev=83.31, samples=19 00:34:18.527 iops : min= 460, max= 544, avg=524.00, stdev=20.83, samples=19 00:34:18.527 lat (msec) : 10=0.30%, 20=1.71%, 50=97.83%, 100=0.15% 00:34:18.527 cpu : usr=99.12%, sys=0.59%, ctx=39, majf=0, minf=56 00:34:18.527 IO depths : 1=0.1%, 2=1.8%, 4=7.9%, 8=74.0%, 16=16.3%, 32=0.0%, >=64=0.0% 00:34:18.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.527 complete : 0=0.0%, 4=90.8%, 8=7.1%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.527 issued rwts: total=5250,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.527 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:18.527 00:34:18.527 Run status group 0 (all jobs): 00:34:18.527 READ: bw=49.0MiB/s (51.4MB/s), 2079KiB/s-2143KiB/s (2129kB/s-2194kB/s), io=491MiB (515MB), run=10001-10021msec 00:34:18.527 12:28:29 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:18.527 12:28:29 -- target/dif.sh@43 -- # local sub 00:34:18.527 12:28:29 -- target/dif.sh@45 -- # for sub in "$@" 00:34:18.527 12:28:29 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:18.527 12:28:29 -- target/dif.sh@36 -- # local sub_id=0 00:34:18.527 12:28:29 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:18.527 12:28:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:18.527 12:28:29 -- common/autotest_common.sh@10 -- # set +x 00:34:18.527 12:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:18.527 12:28:29 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:18.527 12:28:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:18.527 12:28:29 -- common/autotest_common.sh@10 -- # set +x 00:34:18.527 12:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:18.527 12:28:29 -- target/dif.sh@45 -- # for sub in "$@" 00:34:18.527 12:28:29 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:18.527 12:28:29 -- target/dif.sh@36 -- # local sub_id=1 00:34:18.527 12:28:29 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:18.527 12:28:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:18.527 12:28:29 -- common/autotest_common.sh@10 -- # set +x 00:34:18.527 12:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:18.527 12:28:29 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:18.527 12:28:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:18.527 12:28:29 -- common/autotest_common.sh@10 -- # set +x 00:34:18.527 12:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:18.527 12:28:29 -- target/dif.sh@45 -- # for sub in "$@" 00:34:18.527 12:28:29 -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:18.527 12:28:29 -- target/dif.sh@36 -- # local sub_id=2 00:34:18.527 12:28:29 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:18.527 12:28:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:18.527 12:28:29 -- common/autotest_common.sh@10 -- # set +x 00:34:18.527 12:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:18.527 
12:28:29 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:18.527 12:28:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:18.527 12:28:29 -- common/autotest_common.sh@10 -- # set +x 00:34:18.527 12:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:18.527 12:28:29 -- target/dif.sh@115 -- # NULL_DIF=1 00:34:18.527 12:28:29 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:18.527 12:28:29 -- target/dif.sh@115 -- # numjobs=2 00:34:18.527 12:28:29 -- target/dif.sh@115 -- # iodepth=8 00:34:18.527 12:28:29 -- target/dif.sh@115 -- # runtime=5 00:34:18.527 12:28:29 -- target/dif.sh@115 -- # files=1 00:34:18.527 12:28:29 -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:18.527 12:28:29 -- target/dif.sh@28 -- # local sub 00:34:18.527 12:28:29 -- target/dif.sh@30 -- # for sub in "$@" 00:34:18.527 12:28:29 -- target/dif.sh@31 -- # create_subsystem 0 00:34:18.527 12:28:29 -- target/dif.sh@18 -- # local sub_id=0 00:34:18.527 12:28:29 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:18.527 12:28:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:18.527 12:28:29 -- common/autotest_common.sh@10 -- # set +x 00:34:18.527 bdev_null0 00:34:18.527 12:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:18.527 12:28:29 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:18.527 12:28:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:18.527 12:28:29 -- common/autotest_common.sh@10 -- # set +x 00:34:18.527 12:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:18.528 12:28:29 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:18.528 12:28:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:18.528 12:28:29 -- common/autotest_common.sh@10 -- # set +x 00:34:18.528 12:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:18.528 12:28:29 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:18.528 12:28:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:18.528 12:28:29 -- common/autotest_common.sh@10 -- # set +x 00:34:18.528 [2024-06-11 12:28:29.764348] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:18.528 12:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:18.528 12:28:29 -- target/dif.sh@30 -- # for sub in "$@" 00:34:18.528 12:28:29 -- target/dif.sh@31 -- # create_subsystem 1 00:34:18.528 12:28:29 -- target/dif.sh@18 -- # local sub_id=1 00:34:18.528 12:28:29 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:18.528 12:28:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:18.528 12:28:29 -- common/autotest_common.sh@10 -- # set +x 00:34:18.528 bdev_null1 00:34:18.528 12:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:18.528 12:28:29 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:18.528 12:28:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:18.528 12:28:29 -- common/autotest_common.sh@10 -- # set +x 00:34:18.528 12:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:18.528 12:28:29 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:18.528 12:28:29 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:34:18.528 12:28:29 -- common/autotest_common.sh@10 -- # set +x 00:34:18.528 12:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:18.528 12:28:29 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:18.528 12:28:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:18.528 12:28:29 -- common/autotest_common.sh@10 -- # set +x 00:34:18.528 12:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:18.528 12:28:29 -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:18.528 12:28:29 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:18.528 12:28:29 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:18.528 12:28:29 -- nvmf/common.sh@520 -- # config=() 00:34:18.528 12:28:29 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:18.528 12:28:29 -- nvmf/common.sh@520 -- # local subsystem config 00:34:18.528 12:28:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:18.528 12:28:29 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:18.528 12:28:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:18.528 { 00:34:18.528 "params": { 00:34:18.528 "name": "Nvme$subsystem", 00:34:18.528 "trtype": "$TEST_TRANSPORT", 00:34:18.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:18.528 "adrfam": "ipv4", 00:34:18.528 "trsvcid": "$NVMF_PORT", 00:34:18.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:18.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:18.528 "hdgst": ${hdgst:-false}, 00:34:18.528 "ddgst": ${ddgst:-false} 00:34:18.528 }, 00:34:18.528 "method": "bdev_nvme_attach_controller" 00:34:18.528 } 00:34:18.528 EOF 00:34:18.528 )") 00:34:18.528 12:28:29 -- target/dif.sh@82 -- # gen_fio_conf 00:34:18.528 12:28:29 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:18.528 12:28:29 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:18.528 12:28:29 -- target/dif.sh@54 -- # local file 00:34:18.528 12:28:29 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:18.528 12:28:29 -- target/dif.sh@56 -- # cat 00:34:18.528 12:28:29 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:18.528 12:28:29 -- common/autotest_common.sh@1320 -- # shift 00:34:18.528 12:28:29 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:18.528 12:28:29 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:18.528 12:28:29 -- nvmf/common.sh@542 -- # cat 00:34:18.528 12:28:29 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:18.528 12:28:29 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:18.528 12:28:29 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:18.528 12:28:29 -- target/dif.sh@72 -- # (( file <= files )) 00:34:18.528 12:28:29 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:18.528 12:28:29 -- target/dif.sh@73 -- # cat 00:34:18.528 12:28:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:18.528 12:28:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:18.528 { 00:34:18.528 "params": { 00:34:18.528 "name": "Nvme$subsystem", 00:34:18.528 "trtype": "$TEST_TRANSPORT", 00:34:18.528 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:34:18.528 "adrfam": "ipv4", 00:34:18.528 "trsvcid": "$NVMF_PORT", 00:34:18.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:18.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:18.528 "hdgst": ${hdgst:-false}, 00:34:18.528 "ddgst": ${ddgst:-false} 00:34:18.528 }, 00:34:18.528 "method": "bdev_nvme_attach_controller" 00:34:18.528 } 00:34:18.528 EOF 00:34:18.528 )") 00:34:18.528 12:28:29 -- target/dif.sh@72 -- # (( file++ )) 00:34:18.528 12:28:29 -- target/dif.sh@72 -- # (( file <= files )) 00:34:18.528 12:28:29 -- nvmf/common.sh@542 -- # cat 00:34:18.528 12:28:29 -- nvmf/common.sh@544 -- # jq . 00:34:18.528 12:28:29 -- nvmf/common.sh@545 -- # IFS=, 00:34:18.528 12:28:29 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:18.528 "params": { 00:34:18.528 "name": "Nvme0", 00:34:18.528 "trtype": "tcp", 00:34:18.528 "traddr": "10.0.0.2", 00:34:18.528 "adrfam": "ipv4", 00:34:18.528 "trsvcid": "4420", 00:34:18.528 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:18.528 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:18.528 "hdgst": false, 00:34:18.528 "ddgst": false 00:34:18.528 }, 00:34:18.528 "method": "bdev_nvme_attach_controller" 00:34:18.528 },{ 00:34:18.528 "params": { 00:34:18.528 "name": "Nvme1", 00:34:18.528 "trtype": "tcp", 00:34:18.528 "traddr": "10.0.0.2", 00:34:18.528 "adrfam": "ipv4", 00:34:18.528 "trsvcid": "4420", 00:34:18.528 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:18.528 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:18.528 "hdgst": false, 00:34:18.528 "ddgst": false 00:34:18.528 }, 00:34:18.528 "method": "bdev_nvme_attach_controller" 00:34:18.528 }' 00:34:18.528 12:28:29 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:18.528 12:28:29 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:18.528 12:28:29 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:18.528 12:28:29 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:18.528 12:28:29 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:18.528 12:28:29 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:18.528 12:28:29 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:18.528 12:28:29 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:18.528 12:28:29 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:18.528 12:28:29 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:18.528 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:18.528 ... 00:34:18.528 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:18.528 ... 00:34:18.528 fio-3.35 00:34:18.528 Starting 4 threads 00:34:18.528 EAL: No free 2048 kB hugepages reported on node 1 00:34:18.528 [2024-06-11 12:28:30.779309] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:34:18.528 [2024-06-11 12:28:30.779361] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:23.807 00:34:23.807 filename0: (groupid=0, jobs=1): err= 0: pid=1730151: Tue Jun 11 12:28:35 2024 00:34:23.807 read: IOPS=2180, BW=17.0MiB/s (17.9MB/s)(85.2MiB/5005msec) 00:34:23.807 slat (nsec): min=5372, max=54155, avg=5883.72, stdev=1550.91 00:34:23.807 clat (usec): min=1887, max=6342, avg=3652.87, stdev=569.42 00:34:23.807 lat (usec): min=1892, max=6348, avg=3658.75, stdev=569.30 00:34:23.807 clat percentiles (usec): 00:34:23.807 | 1.00th=[ 2638], 5.00th=[ 2999], 10.00th=[ 3163], 20.00th=[ 3326], 00:34:23.807 | 30.00th=[ 3392], 40.00th=[ 3458], 50.00th=[ 3556], 60.00th=[ 3621], 00:34:23.807 | 70.00th=[ 3654], 80.00th=[ 3851], 90.00th=[ 4490], 95.00th=[ 5014], 00:34:23.807 | 99.00th=[ 5538], 99.50th=[ 5735], 99.90th=[ 5997], 99.95th=[ 6063], 00:34:23.807 | 99.99th=[ 6325] 00:34:23.807 bw ( KiB/s): min=17248, max=17616, per=25.00%, avg=17439.67, stdev=147.50, samples=9 00:34:23.807 iops : min= 2156, max= 2202, avg=2179.89, stdev=18.48, samples=9 00:34:23.807 lat (msec) : 2=0.03%, 4=86.46%, 10=13.51% 00:34:23.807 cpu : usr=97.22%, sys=2.54%, ctx=10, majf=0, minf=0 00:34:23.807 IO depths : 1=0.1%, 2=0.2%, 4=72.1%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:23.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:23.807 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:23.807 issued rwts: total=10911,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:23.807 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:23.807 filename0: (groupid=0, jobs=1): err= 0: pid=1730152: Tue Jun 11 12:28:35 2024 00:34:23.807 read: IOPS=2183, BW=17.1MiB/s (17.9MB/s)(85.3MiB/5003msec) 00:34:23.807 slat (nsec): min=5369, max=31269, avg=7888.99, stdev=1855.19 00:34:23.807 clat (usec): min=2024, max=44015, avg=3643.44, stdev=1187.99 00:34:23.807 lat (usec): min=2029, max=44046, avg=3651.33, stdev=1188.07 00:34:23.807 clat percentiles (usec): 00:34:23.807 | 1.00th=[ 2769], 5.00th=[ 3097], 10.00th=[ 3228], 20.00th=[ 3359], 00:34:23.807 | 30.00th=[ 3425], 40.00th=[ 3458], 50.00th=[ 3556], 60.00th=[ 3621], 00:34:23.807 | 70.00th=[ 3654], 80.00th=[ 3752], 90.00th=[ 3982], 95.00th=[ 4883], 00:34:23.807 | 99.00th=[ 5538], 99.50th=[ 5604], 99.90th=[ 6063], 99.95th=[43779], 00:34:23.807 | 99.99th=[43779] 00:34:23.808 bw ( KiB/s): min=16640, max=17984, per=25.08%, avg=17489.78, stdev=424.18, samples=9 00:34:23.808 iops : min= 2080, max= 2248, avg=2186.22, stdev=53.02, samples=9 00:34:23.808 lat (msec) : 4=91.07%, 10=8.85%, 50=0.07% 00:34:23.808 cpu : usr=96.30%, sys=2.78%, ctx=126, majf=0, minf=9 00:34:23.808 IO depths : 1=0.1%, 2=0.2%, 4=69.3%, 8=30.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:23.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:23.808 complete : 0=0.0%, 4=94.9%, 8=5.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:23.808 issued rwts: total=10924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:23.808 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:23.808 filename1: (groupid=0, jobs=1): err= 0: pid=1730153: Tue Jun 11 12:28:35 2024 00:34:23.808 read: IOPS=2151, BW=16.8MiB/s (17.6MB/s)(84.1MiB/5003msec) 00:34:23.808 slat (nsec): min=5363, max=54703, avg=5882.70, stdev=1453.59 00:34:23.808 clat (usec): min=1891, max=6500, avg=3703.02, stdev=578.54 00:34:23.808 lat (usec): min=1898, max=6533, avg=3708.91, stdev=578.47 00:34:23.808 clat percentiles (usec): 00:34:23.808 | 1.00th=[ 2737], 
5.00th=[ 3130], 10.00th=[ 3261], 20.00th=[ 3359], 00:34:23.808 | 30.00th=[ 3425], 40.00th=[ 3523], 50.00th=[ 3556], 60.00th=[ 3621], 00:34:23.808 | 70.00th=[ 3654], 80.00th=[ 3884], 90.00th=[ 4555], 95.00th=[ 5211], 00:34:23.808 | 99.00th=[ 5669], 99.50th=[ 5866], 99.90th=[ 6259], 99.95th=[ 6259], 00:34:23.808 | 99.99th=[ 6456] 00:34:23.808 bw ( KiB/s): min=17024, max=17520, per=24.65%, avg=17194.67, stdev=152.21, samples=9 00:34:23.808 iops : min= 2128, max= 2190, avg=2149.33, stdev=19.03, samples=9 00:34:23.808 lat (msec) : 2=0.03%, 4=87.06%, 10=12.92% 00:34:23.808 cpu : usr=97.72%, sys=2.08%, ctx=6, majf=0, minf=9 00:34:23.808 IO depths : 1=0.1%, 2=0.1%, 4=72.9%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:23.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:23.808 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:23.808 issued rwts: total=10762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:23.808 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:23.808 filename1: (groupid=0, jobs=1): err= 0: pid=1730154: Tue Jun 11 12:28:35 2024 00:34:23.808 read: IOPS=2206, BW=17.2MiB/s (18.1MB/s)(86.2MiB/5002msec) 00:34:23.808 slat (nsec): min=5380, max=80275, avg=6188.92, stdev=2746.50 00:34:23.808 clat (usec): min=1132, max=6077, avg=3608.05, stdev=651.35 00:34:23.808 lat (usec): min=1143, max=6083, avg=3614.24, stdev=651.05 00:34:23.808 clat percentiles (usec): 00:34:23.808 | 1.00th=[ 2540], 5.00th=[ 2769], 10.00th=[ 2999], 20.00th=[ 3195], 00:34:23.808 | 30.00th=[ 3294], 40.00th=[ 3392], 50.00th=[ 3490], 60.00th=[ 3589], 00:34:23.808 | 70.00th=[ 3654], 80.00th=[ 3818], 90.00th=[ 4686], 95.00th=[ 5145], 00:34:23.808 | 99.00th=[ 5538], 99.50th=[ 5735], 99.90th=[ 5932], 99.95th=[ 5997], 00:34:23.808 | 99.99th=[ 6063] 00:34:23.808 bw ( KiB/s): min=17088, max=18240, per=25.33%, avg=17667.56, stdev=303.56, samples=9 00:34:23.808 iops : min= 2136, max= 2280, avg=2208.44, stdev=37.94, samples=9 00:34:23.808 lat (msec) : 2=0.18%, 4=84.22%, 10=15.60% 00:34:23.808 cpu : usr=95.62%, sys=3.38%, ctx=98, majf=0, minf=0 00:34:23.808 IO depths : 1=0.1%, 2=0.5%, 4=71.9%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:23.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:23.808 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:23.808 issued rwts: total=11037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:23.808 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:23.808 00:34:23.808 Run status group 0 (all jobs): 00:34:23.808 READ: bw=68.1MiB/s (71.4MB/s), 16.8MiB/s-17.2MiB/s (17.6MB/s-18.1MB/s), io=341MiB (357MB), run=5002-5005msec 00:34:23.808 12:28:36 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:23.808 12:28:36 -- target/dif.sh@43 -- # local sub 00:34:23.808 12:28:36 -- target/dif.sh@45 -- # for sub in "$@" 00:34:23.808 12:28:36 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:23.808 12:28:36 -- target/dif.sh@36 -- # local sub_id=0 00:34:23.808 12:28:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:23.808 12:28:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:23.808 12:28:36 -- common/autotest_common.sh@10 -- # set +x 00:34:23.808 12:28:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:23.808 12:28:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:23.808 12:28:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:23.808 12:28:36 -- common/autotest_common.sh@10 -- # set +x 
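
A minimal sketch of the plumbing the randread run above exercises, assuming the standard SPDK fio bdev plugin layout: fio is preloaded with build/fio/spdk_bdev, --spdk_json_conf points at a JSON file whose bdev_nvme_attach_controller entries match the ones printed by gen_nvmf_target_json, and the job addresses the attached namespaces by bdev name (Nvme0n1, Nvme1n1). The outer "subsystems"/"bdev" wrapper and the job options below are illustrative assumptions; only the controller parameters appear verbatim in the trace.

cat > /tmp/nvme_bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# the plugin needs fio's thread mode; filename is the bdev name, not a device path
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
  fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme_bdev.json \
      --thread --name=randread --filename=Nvme0n1 \
      --rw=randread --bs=8k --iodepth=8 --runtime=10 --time_based
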
00:34:23.808 12:28:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:23.808 12:28:36 -- target/dif.sh@45 -- # for sub in "$@" 00:34:23.808 12:28:36 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:23.808 12:28:36 -- target/dif.sh@36 -- # local sub_id=1 00:34:23.808 12:28:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:23.808 12:28:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:23.808 12:28:36 -- common/autotest_common.sh@10 -- # set +x 00:34:23.808 12:28:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:23.808 12:28:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:23.808 12:28:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:23.808 12:28:36 -- common/autotest_common.sh@10 -- # set +x 00:34:23.808 12:28:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:23.808 00:34:23.808 real 0m24.097s 00:34:23.808 user 5m16.952s 00:34:23.808 sys 0m3.517s 00:34:23.808 12:28:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:23.808 12:28:36 -- common/autotest_common.sh@10 -- # set +x 00:34:23.808 ************************************ 00:34:23.808 END TEST fio_dif_rand_params 00:34:23.808 ************************************ 00:34:23.808 12:28:36 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:23.808 12:28:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:23.808 12:28:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:23.808 12:28:36 -- common/autotest_common.sh@10 -- # set +x 00:34:23.808 ************************************ 00:34:23.808 START TEST fio_dif_digest 00:34:23.808 ************************************ 00:34:23.808 12:28:36 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:34:23.808 12:28:36 -- target/dif.sh@123 -- # local NULL_DIF 00:34:23.808 12:28:36 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:23.808 12:28:36 -- target/dif.sh@125 -- # local hdgst ddgst 00:34:23.808 12:28:36 -- target/dif.sh@127 -- # NULL_DIF=3 00:34:23.808 12:28:36 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:23.808 12:28:36 -- target/dif.sh@127 -- # numjobs=3 00:34:23.808 12:28:36 -- target/dif.sh@127 -- # iodepth=3 00:34:23.808 12:28:36 -- target/dif.sh@127 -- # runtime=10 00:34:23.808 12:28:36 -- target/dif.sh@128 -- # hdgst=true 00:34:23.808 12:28:36 -- target/dif.sh@128 -- # ddgst=true 00:34:23.808 12:28:36 -- target/dif.sh@130 -- # create_subsystems 0 00:34:23.808 12:28:36 -- target/dif.sh@28 -- # local sub 00:34:23.808 12:28:36 -- target/dif.sh@30 -- # for sub in "$@" 00:34:23.808 12:28:36 -- target/dif.sh@31 -- # create_subsystem 0 00:34:23.808 12:28:36 -- target/dif.sh@18 -- # local sub_id=0 00:34:23.808 12:28:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:23.808 12:28:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:23.808 12:28:36 -- common/autotest_common.sh@10 -- # set +x 00:34:23.808 bdev_null0 00:34:23.808 12:28:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:23.808 12:28:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:23.808 12:28:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:23.808 12:28:36 -- common/autotest_common.sh@10 -- # set +x 00:34:23.808 12:28:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:23.808 12:28:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bdev_null0 00:34:23.808 12:28:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:23.808 12:28:36 -- common/autotest_common.sh@10 -- # set +x 00:34:23.808 12:28:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:23.808 12:28:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:23.808 12:28:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:23.808 12:28:36 -- common/autotest_common.sh@10 -- # set +x 00:34:23.808 [2024-06-11 12:28:36.166680] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:23.808 12:28:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:23.808 12:28:36 -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:23.809 12:28:36 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:23.809 12:28:36 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:23.809 12:28:36 -- nvmf/common.sh@520 -- # config=() 00:34:23.809 12:28:36 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:23.809 12:28:36 -- nvmf/common.sh@520 -- # local subsystem config 00:34:23.809 12:28:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:23.809 12:28:36 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:23.809 12:28:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:23.809 { 00:34:23.809 "params": { 00:34:23.809 "name": "Nvme$subsystem", 00:34:23.809 "trtype": "$TEST_TRANSPORT", 00:34:23.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:23.809 "adrfam": "ipv4", 00:34:23.809 "trsvcid": "$NVMF_PORT", 00:34:23.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:23.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:23.809 "hdgst": ${hdgst:-false}, 00:34:23.809 "ddgst": ${ddgst:-false} 00:34:23.809 }, 00:34:23.809 "method": "bdev_nvme_attach_controller" 00:34:23.809 } 00:34:23.809 EOF 00:34:23.809 )") 00:34:23.809 12:28:36 -- target/dif.sh@82 -- # gen_fio_conf 00:34:23.809 12:28:36 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:23.809 12:28:36 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:23.809 12:28:36 -- target/dif.sh@54 -- # local file 00:34:23.809 12:28:36 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:23.809 12:28:36 -- target/dif.sh@56 -- # cat 00:34:23.809 12:28:36 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:23.809 12:28:36 -- common/autotest_common.sh@1320 -- # shift 00:34:23.809 12:28:36 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:23.809 12:28:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:23.809 12:28:36 -- nvmf/common.sh@542 -- # cat 00:34:23.809 12:28:36 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:23.809 12:28:36 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:23.809 12:28:36 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:23.809 12:28:36 -- target/dif.sh@72 -- # (( file <= files )) 00:34:23.809 12:28:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:23.809 12:28:36 -- nvmf/common.sh@544 -- # jq . 
00:34:23.809 12:28:36 -- nvmf/common.sh@545 -- # IFS=, 00:34:23.809 12:28:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:23.809 "params": { 00:34:23.809 "name": "Nvme0", 00:34:23.809 "trtype": "tcp", 00:34:23.809 "traddr": "10.0.0.2", 00:34:23.809 "adrfam": "ipv4", 00:34:23.809 "trsvcid": "4420", 00:34:23.809 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:23.809 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:23.809 "hdgst": true, 00:34:23.809 "ddgst": true 00:34:23.809 }, 00:34:23.809 "method": "bdev_nvme_attach_controller" 00:34:23.809 }' 00:34:23.809 12:28:36 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:23.809 12:28:36 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:23.809 12:28:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:23.809 12:28:36 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:23.809 12:28:36 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:23.809 12:28:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:23.809 12:28:36 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:23.809 12:28:36 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:23.809 12:28:36 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:23.809 12:28:36 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:23.809 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:23.809 ... 00:34:23.809 fio-3.35 00:34:23.809 Starting 3 threads 00:34:23.809 EAL: No free 2048 kB hugepages reported on node 1 00:34:24.069 [2024-06-11 12:28:36.930424] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:34:24.069 [2024-06-11 12:28:36.930473] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:36.299 00:34:36.299 filename0: (groupid=0, jobs=1): err= 0: pid=1731680: Tue Jun 11 12:28:47 2024 00:34:36.299 read: IOPS=258, BW=32.3MiB/s (33.8MB/s)(324MiB/10045msec) 00:34:36.299 slat (nsec): min=5646, max=73958, avg=7834.27, stdev=1831.97 00:34:36.299 clat (usec): min=6547, max=56138, avg=11590.34, stdev=2273.17 00:34:36.299 lat (usec): min=6554, max=56145, avg=11598.18, stdev=2273.32 00:34:36.299 clat percentiles (usec): 00:34:36.299 | 1.00th=[ 8160], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9503], 00:34:36.299 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11600], 60.00th=[12256], 00:34:36.299 | 70.00th=[12911], 80.00th=[13435], 90.00th=[14091], 95.00th=[14615], 00:34:36.299 | 99.00th=[15664], 99.50th=[15926], 99.90th=[20579], 99.95th=[49546], 00:34:36.299 | 99.99th=[56361] 00:34:36.299 bw ( KiB/s): min=30720, max=36352, per=39.84%, avg=33177.60, stdev=1427.52, samples=20 00:34:36.299 iops : min= 240, max= 284, avg=259.20, stdev=11.15, samples=20 00:34:36.299 lat (msec) : 10=29.65%, 20=70.24%, 50=0.08%, 100=0.04% 00:34:36.299 cpu : usr=95.45%, sys=4.27%, ctx=30, majf=0, minf=160 00:34:36.299 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:36.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.299 issued rwts: total=2594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.299 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:36.299 filename0: (groupid=0, jobs=1): err= 0: pid=1731681: Tue Jun 11 12:28:47 2024 00:34:36.299 read: IOPS=253, BW=31.7MiB/s (33.3MB/s)(319MiB/10047msec) 00:34:36.299 slat (nsec): min=5624, max=43588, avg=7366.65, stdev=1484.57 00:34:36.299 clat (usec): min=7547, max=54800, avg=11790.25, stdev=3058.27 00:34:36.299 lat (usec): min=7556, max=54806, avg=11797.62, stdev=3058.19 00:34:36.299 clat percentiles (usec): 00:34:36.299 | 1.00th=[ 8225], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9634], 00:34:36.299 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11863], 60.00th=[12649], 00:34:36.299 | 70.00th=[13042], 80.00th=[13566], 90.00th=[14091], 95.00th=[14615], 00:34:36.299 | 99.00th=[15401], 99.50th=[15926], 99.90th=[54789], 99.95th=[54789], 00:34:36.299 | 99.99th=[54789] 00:34:36.299 bw ( KiB/s): min=28160, max=35584, per=39.18%, avg=32627.20, stdev=1856.24, samples=20 00:34:36.299 iops : min= 220, max= 278, avg=254.90, stdev=14.50, samples=20 00:34:36.299 lat (msec) : 10=28.93%, 20=70.76%, 100=0.31% 00:34:36.299 cpu : usr=96.94%, sys=2.84%, ctx=15, majf=0, minf=158 00:34:36.299 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:36.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.299 issued rwts: total=2551,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.299 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:36.299 filename0: (groupid=0, jobs=1): err= 0: pid=1731682: Tue Jun 11 12:28:47 2024 00:34:36.299 read: IOPS=138, BW=17.3MiB/s (18.2MB/s)(174MiB/10046msec) 00:34:36.299 slat (nsec): min=5706, max=33704, avg=7404.84, stdev=1353.65 00:34:36.299 clat (msec): min=8, max=135, avg=21.61, stdev=17.34 00:34:36.299 lat (msec): min=8, max=135, avg=21.62, stdev=17.34 00:34:36.299 clat percentiles (msec): 00:34:36.299 | 
1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 13], 00:34:36.299 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 15], 00:34:36.299 | 70.00th=[ 15], 80.00th=[ 17], 90.00th=[ 54], 95.00th=[ 55], 00:34:36.299 | 99.00th=[ 57], 99.50th=[ 95], 99.90th=[ 96], 99.95th=[ 136], 00:34:36.299 | 99.99th=[ 136] 00:34:36.299 bw ( KiB/s): min=11264, max=25344, per=21.36%, avg=17792.00, stdev=3399.78, samples=20 00:34:36.299 iops : min= 88, max= 198, avg=139.00, stdev=26.56, samples=20 00:34:36.299 lat (msec) : 10=0.43%, 20=79.89%, 50=0.29%, 100=19.32%, 250=0.07% 00:34:36.299 cpu : usr=97.17%, sys=2.28%, ctx=640, majf=0, minf=151 00:34:36.299 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:36.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.299 issued rwts: total=1392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.299 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:36.299 00:34:36.299 Run status group 0 (all jobs): 00:34:36.299 READ: bw=81.3MiB/s (85.3MB/s), 17.3MiB/s-32.3MiB/s (18.2MB/s-33.8MB/s), io=817MiB (857MB), run=10045-10047msec 00:34:36.299 12:28:47 -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:36.299 12:28:47 -- target/dif.sh@43 -- # local sub 00:34:36.299 12:28:47 -- target/dif.sh@45 -- # for sub in "$@" 00:34:36.299 12:28:47 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:36.299 12:28:47 -- target/dif.sh@36 -- # local sub_id=0 00:34:36.299 12:28:47 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:36.299 12:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:36.299 12:28:47 -- common/autotest_common.sh@10 -- # set +x 00:34:36.299 12:28:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:36.299 12:28:47 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:36.299 12:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:36.299 12:28:47 -- common/autotest_common.sh@10 -- # set +x 00:34:36.299 12:28:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:36.299 00:34:36.299 real 0m11.120s 00:34:36.299 user 0m42.626s 00:34:36.299 sys 0m1.263s 00:34:36.299 12:28:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:36.299 12:28:47 -- common/autotest_common.sh@10 -- # set +x 00:34:36.299 ************************************ 00:34:36.299 END TEST fio_dif_digest 00:34:36.299 ************************************ 00:34:36.299 12:28:47 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:36.299 12:28:47 -- target/dif.sh@147 -- # nvmftestfini 00:34:36.299 12:28:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:34:36.299 12:28:47 -- nvmf/common.sh@116 -- # sync 00:34:36.299 12:28:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:34:36.299 12:28:47 -- nvmf/common.sh@119 -- # set +e 00:34:36.299 12:28:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:34:36.299 12:28:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:34:36.299 rmmod nvme_tcp 00:34:36.299 rmmod nvme_fabrics 00:34:36.299 rmmod nvme_keyring 00:34:36.299 12:28:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:34:36.299 12:28:47 -- nvmf/common.sh@123 -- # set -e 00:34:36.299 12:28:47 -- nvmf/common.sh@124 -- # return 0 00:34:36.299 12:28:47 -- nvmf/common.sh@477 -- # '[' -n 1721112 ']' 00:34:36.299 12:28:47 -- nvmf/common.sh@478 -- # killprocess 1721112 00:34:36.299 12:28:47 -- common/autotest_common.sh@926 -- # '[' -z 1721112 ']' 00:34:36.299 
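
The digest run above talks to a target whose objects were created by the dif.sh helpers; issued by hand, the equivalent RPCs look roughly like the sketch below. The rpc.py path and the nvmf_create_transport call are assumptions here (the transport is set up before this excerpt); the bdev_null_create arguments mirror the trace, a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3. Header and data digests are then enabled purely on the initiator side, via "hdgst": true / "ddgst": true in the attach-controller parameters printed above.

RPC=/path/to/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                      # assumed; done earlier in the run
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
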
12:28:47 -- common/autotest_common.sh@930 -- # kill -0 1721112 00:34:36.299 12:28:47 -- common/autotest_common.sh@931 -- # uname 00:34:36.299 12:28:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:36.299 12:28:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1721112 00:34:36.299 12:28:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:36.299 12:28:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:36.299 12:28:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1721112' 00:34:36.299 killing process with pid 1721112 00:34:36.299 12:28:47 -- common/autotest_common.sh@945 -- # kill 1721112 00:34:36.299 12:28:47 -- common/autotest_common.sh@950 -- # wait 1721112 00:34:36.299 12:28:47 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:34:36.299 12:28:47 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:38.212 Waiting for block devices as requested 00:34:38.212 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:38.212 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:38.212 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:38.212 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:38.212 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:38.498 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:38.498 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:38.498 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:38.766 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:34:38.766 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:38.766 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:39.026 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:39.026 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:39.026 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:39.026 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:39.289 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:39.289 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:39.289 12:28:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:34:39.289 12:28:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:34:39.289 12:28:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:39.289 12:28:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:34:39.289 12:28:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:39.289 12:28:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:39.289 12:28:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:41.828 12:28:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:34:41.828 00:34:41.828 real 1m16.486s 00:34:41.828 user 8m2.422s 00:34:41.828 sys 0m18.107s 00:34:41.828 12:28:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:41.828 12:28:54 -- common/autotest_common.sh@10 -- # set +x 00:34:41.828 ************************************ 00:34:41.828 END TEST nvmf_dif 00:34:41.828 ************************************ 00:34:41.828 12:28:54 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:41.828 12:28:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:41.828 12:28:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:41.828 12:28:54 -- common/autotest_common.sh@10 -- # set +x 00:34:41.828 ************************************ 00:34:41.828 START TEST nvmf_abort_qd_sizes 00:34:41.828 ************************************ 00:34:41.828 12:28:54 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:41.828 * Looking for test storage... 00:34:41.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:41.828 12:28:54 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:41.828 12:28:54 -- nvmf/common.sh@7 -- # uname -s 00:34:41.828 12:28:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:41.828 12:28:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:41.828 12:28:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:41.828 12:28:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:41.828 12:28:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:41.828 12:28:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:41.828 12:28:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:41.828 12:28:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:41.828 12:28:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:41.828 12:28:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:41.828 12:28:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:41.828 12:28:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:41.828 12:28:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:41.828 12:28:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:41.828 12:28:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:41.828 12:28:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:41.828 12:28:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:41.828 12:28:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:41.828 12:28:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:41.828 12:28:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.828 12:28:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.828 12:28:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.828 12:28:54 -- paths/export.sh@5 -- # export PATH 00:34:41.829 12:28:54 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.829 12:28:54 -- nvmf/common.sh@46 -- # : 0 00:34:41.829 12:28:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:41.829 12:28:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:41.829 12:28:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:41.829 12:28:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:41.829 12:28:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:41.829 12:28:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:41.829 12:28:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:41.829 12:28:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:41.829 12:28:54 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:34:41.829 12:28:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:34:41.829 12:28:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:41.829 12:28:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:34:41.829 12:28:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:34:41.829 12:28:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:34:41.829 12:28:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:41.829 12:28:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:41.829 12:28:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:41.829 12:28:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:34:41.829 12:28:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:34:41.829 12:28:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:34:41.829 12:28:54 -- common/autotest_common.sh@10 -- # set +x 00:34:48.413 12:29:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:34:48.413 12:29:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:34:48.413 12:29:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:34:48.413 12:29:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:34:48.413 12:29:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:34:48.413 12:29:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:34:48.413 12:29:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:34:48.413 12:29:01 -- nvmf/common.sh@294 -- # net_devs=() 00:34:48.413 12:29:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:34:48.413 12:29:01 -- nvmf/common.sh@295 -- # e810=() 00:34:48.413 12:29:01 -- nvmf/common.sh@295 -- # local -ga e810 00:34:48.413 12:29:01 -- nvmf/common.sh@296 -- # x722=() 00:34:48.413 12:29:01 -- nvmf/common.sh@296 -- # local -ga x722 00:34:48.413 12:29:01 -- nvmf/common.sh@297 -- # mlx=() 00:34:48.413 12:29:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:34:48.413 12:29:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:48.413 12:29:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:48.413 12:29:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:48.413 12:29:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:48.413 12:29:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:48.413 12:29:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:48.413 12:29:01 -- nvmf/common.sh@311 -- # 
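
One detail from the common.sh setup above: the host identity used by kernel-initiator steps is minted once per run with nvme gen-hostnqn, and that NQN/UUID pair is what the NVME_HOST array carries. A sketch of how those values are typically consumed follows; the connect itself is not part of this excerpt, and the address, port and subsystem NQN are reused from targets configured elsewhere in this log purely for illustration.

HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
HOSTID=${HOSTNQN##*uuid:}            # keep just the UUID portion
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0 \
     --hostnqn="$HOSTNQN" --hostid="$HOSTID"
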
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:48.413 12:29:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:48.413 12:29:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:48.413 12:29:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:48.413 12:29:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:48.413 12:29:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:34:48.413 12:29:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:34:48.413 12:29:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:34:48.413 12:29:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:34:48.413 12:29:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:34:48.413 12:29:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:34:48.413 12:29:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:48.413 12:29:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:48.413 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:48.413 12:29:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:48.413 12:29:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:48.413 12:29:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:48.413 12:29:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:48.413 12:29:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:48.413 12:29:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:48.413 12:29:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:48.413 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:48.413 12:29:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:48.413 12:29:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:48.413 12:29:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:48.413 12:29:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:48.413 12:29:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:48.413 12:29:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:34:48.413 12:29:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:34:48.413 12:29:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:34:48.413 12:29:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:48.413 12:29:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:48.413 12:29:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:48.413 12:29:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:48.413 12:29:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:48.413 Found net devices under 0000:31:00.0: cvl_0_0 00:34:48.413 12:29:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:48.413 12:29:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:48.413 12:29:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:48.413 12:29:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:48.413 12:29:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:48.414 12:29:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:48.414 Found net devices under 0000:31:00.1: cvl_0_1 00:34:48.414 12:29:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:48.414 12:29:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:34:48.414 12:29:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:34:48.414 12:29:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:34:48.414 12:29:01 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:34:48.414 12:29:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:34:48.414 12:29:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:48.414 12:29:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:48.414 12:29:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:48.414 12:29:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:34:48.414 12:29:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:48.414 12:29:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:48.414 12:29:01 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:34:48.414 12:29:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:48.414 12:29:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:48.414 12:29:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:34:48.414 12:29:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:34:48.414 12:29:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:34:48.414 12:29:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:48.675 12:29:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:48.675 12:29:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:48.675 12:29:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:34:48.675 12:29:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:48.675 12:29:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:48.675 12:29:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:48.675 12:29:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:34:48.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:48.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:34:48.675 00:34:48.675 --- 10.0.0.2 ping statistics --- 00:34:48.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.675 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:34:48.675 12:29:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:48.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:48.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:34:48.675 00:34:48.675 --- 10.0.0.1 ping statistics --- 00:34:48.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.675 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:34:48.675 12:29:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:48.675 12:29:01 -- nvmf/common.sh@410 -- # return 0 00:34:48.675 12:29:01 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:34:48.675 12:29:01 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:52.880 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:34:52.880 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:34:52.880 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:34:52.880 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:34:52.880 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:34:52.880 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:34:52.880 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:34:52.880 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:34:52.880 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:34:52.880 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:34:52.880 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:34:52.880 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:34:52.880 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:34:52.880 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:34:52.880 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:34:52.880 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:34:52.880 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:34:52.880 12:29:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:52.880 12:29:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:34:52.880 12:29:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:34:52.880 12:29:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:52.880 12:29:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:34:52.880 12:29:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:34:52.880 12:29:05 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:34:52.880 12:29:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:34:52.880 12:29:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:52.880 12:29:05 -- common/autotest_common.sh@10 -- # set +x 00:34:52.880 12:29:05 -- nvmf/common.sh@469 -- # nvmfpid=1741121 00:34:52.880 12:29:05 -- nvmf/common.sh@470 -- # waitforlisten 1741121 00:34:52.880 12:29:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:52.880 12:29:05 -- common/autotest_common.sh@819 -- # '[' -z 1741121 ']' 00:34:52.880 12:29:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:52.880 12:29:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:52.880 12:29:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:52.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:52.880 12:29:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:52.880 12:29:05 -- common/autotest_common.sh@10 -- # set +x 00:34:52.880 [2024-06-11 12:29:05.412699] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:34:52.880 [2024-06-11 12:29:05.412748] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:52.880 EAL: No free 2048 kB hugepages reported on node 1 00:34:52.880 [2024-06-11 12:29:05.480796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:52.880 [2024-06-11 12:29:05.514560] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:52.880 [2024-06-11 12:29:05.514696] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:52.880 [2024-06-11 12:29:05.514707] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:52.880 [2024-06-11 12:29:05.514716] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:52.880 [2024-06-11 12:29:05.514862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:52.880 [2024-06-11 12:29:05.515001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:52.880 [2024-06-11 12:29:05.515174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:52.880 [2024-06-11 12:29:05.515268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:53.449 12:29:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:53.449 12:29:06 -- common/autotest_common.sh@852 -- # return 0 00:34:53.449 12:29:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:34:53.449 12:29:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:53.449 12:29:06 -- common/autotest_common.sh@10 -- # set +x 00:34:53.449 12:29:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:53.449 12:29:06 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:53.449 12:29:06 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:34:53.449 12:29:06 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:34:53.449 12:29:06 -- scripts/common.sh@311 -- # local bdf bdfs 00:34:53.449 12:29:06 -- scripts/common.sh@312 -- # local nvmes 00:34:53.449 12:29:06 -- scripts/common.sh@314 -- # [[ -n 0000:65:00.0 ]] 00:34:53.449 12:29:06 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:53.449 12:29:06 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:34:53.449 12:29:06 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:34:53.449 12:29:06 -- scripts/common.sh@322 -- # uname -s 00:34:53.449 12:29:06 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:34:53.449 12:29:06 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:34:53.449 12:29:06 -- scripts/common.sh@327 -- # (( 1 )) 00:34:53.449 12:29:06 -- scripts/common.sh@328 -- # printf '%s\n' 0000:65:00.0 00:34:53.449 12:29:06 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:34:53.449 12:29:06 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:65:00.0 00:34:53.449 12:29:06 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:34:53.449 12:29:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:53.449 12:29:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:53.449 12:29:06 -- common/autotest_common.sh@10 -- # set +x 00:34:53.449 ************************************ 00:34:53.449 START TEST 
spdk_target_abort 00:34:53.449 ************************************ 00:34:53.449 12:29:06 -- common/autotest_common.sh@1104 -- # spdk_target 00:34:53.449 12:29:06 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:53.449 12:29:06 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:34:53.449 12:29:06 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:34:53.449 12:29:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:53.449 12:29:06 -- common/autotest_common.sh@10 -- # set +x 00:34:53.710 spdk_targetn1 00:34:53.710 12:29:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:53.710 12:29:06 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:53.710 12:29:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:53.710 12:29:06 -- common/autotest_common.sh@10 -- # set +x 00:34:53.710 [2024-06-11 12:29:06.548011] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:53.710 12:29:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:53.710 12:29:06 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:34:53.710 12:29:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:53.710 12:29:06 -- common/autotest_common.sh@10 -- # set +x 00:34:53.710 12:29:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:53.710 12:29:06 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:34:53.710 12:29:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:53.710 12:29:06 -- common/autotest_common.sh@10 -- # set +x 00:34:53.710 12:29:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:53.710 12:29:06 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:34:53.710 12:29:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:53.710 12:29:06 -- common/autotest_common.sh@10 -- # set +x 00:34:53.710 [2024-06-11 12:29:06.588293] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:53.710 12:29:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:53.710 12:29:06 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:34:53.710 12:29:06 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:53.710 12:29:06 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:53.710 12:29:06 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:53.710 12:29:06 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:53.710 12:29:06 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:34:53.710 12:29:06 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:53.710 12:29:06 -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:53.710 12:29:06 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:53.710 12:29:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:53.710 12:29:06 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:53.710 12:29:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:53.710 12:29:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:53.710 12:29:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:34:53.710 12:29:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:53.710 12:29:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:53.710 12:29:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:53.710 12:29:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:53.710 12:29:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:34:53.710 12:29:06 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:53.710 12:29:06 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:34:53.710 EAL: No free 2048 kB hugepages reported on node 1 00:34:53.970 [2024-06-11 12:29:06.782456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1432 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:34:53.970 [2024-06-11 12:29:06.782480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00b4 p:1 m:0 dnr:0 00:34:53.970 [2024-06-11 12:29:06.807144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2392 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:34:53.970 [2024-06-11 12:29:06.807163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:53.970 [2024-06-11 12:29:06.822475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2944 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:34:53.970 [2024-06-11 12:29:06.822492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:53.970 [2024-06-11 12:29:06.830474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3248 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:34:53.970 [2024-06-11 12:29:06.830489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0098 p:0 m:0 dnr:0 00:34:53.970 [2024-06-11 12:29:06.831738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3344 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:34:53.970 [2024-06-11 12:29:06.831751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00a4 p:0 m:0 dnr:0 00:34:57.268 Initializing NVMe Controllers 00:34:57.268 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:34:57.268 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:34:57.268 Initialization complete. Launching workers. 
00:34:57.268 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 13055, failed: 5 00:34:57.268 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 3085, failed to submit 9975 00:34:57.268 success 737, unsuccess 2348, failed 0 00:34:57.268 12:29:09 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:57.268 12:29:09 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:34:57.268 EAL: No free 2048 kB hugepages reported on node 1 00:34:57.268 [2024-06-11 12:29:09.945159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:688 len:8 PRP1 0x200007c54000 PRP2 0x0 00:34:57.268 [2024-06-11 12:29:09.945198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:34:57.268 [2024-06-11 12:29:09.985178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:1552 len:8 PRP1 0x200007c52000 PRP2 0x0 00:34:57.268 [2024-06-11 12:29:09.985203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:00c8 p:1 m:0 dnr:0 00:34:57.268 [2024-06-11 12:29:09.992552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:1728 len:8 PRP1 0x200007c5c000 PRP2 0x0 00:34:57.268 [2024-06-11 12:29:09.992573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:00e1 p:1 m:0 dnr:0 00:34:57.268 [2024-06-11 12:29:10.008659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:2096 len:8 PRP1 0x200007c44000 PRP2 0x0 00:34:57.268 [2024-06-11 12:29:10.008694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:57.268 [2024-06-11 12:29:10.024045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:2544 len:8 PRP1 0x200007c58000 PRP2 0x0 00:34:57.268 [2024-06-11 12:29:10.024071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:59.181 [2024-06-11 12:29:12.215053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:52584 len:8 PRP1 0x200007c3e000 PRP2 0x0 00:34:59.181 [2024-06-11 12:29:12.215094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:00b0 p:0 m:0 dnr:0 00:34:59.751 [2024-06-11 12:29:12.580031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:176 nsid:1 lba:60816 len:8 PRP1 0x200007c4a000 PRP2 0x0 00:34:59.751 [2024-06-11 12:29:12.580060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:00b6 p:0 m:0 dnr:0 00:35:00.012 [2024-06-11 12:29:12.955054] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2f640 is same with the state(5) to be set 00:35:00.012 [2024-06-11 12:29:12.955080] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2f640 is same with the state(5) to be set 00:35:00.012 [2024-06-11 12:29:12.955087] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2f640 is same with the 
state(5) to be set 00:35:00.012 [2024-06-11 12:29:12.955093] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2f640 is same with the state(5) to be set 00:35:00.012 [2024-06-11 12:29:12.955100] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2f640 is same with the state(5) to be set 00:35:00.012 [2024-06-11 12:29:12.955106] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2f640 is same with the state(5) to be set 00:35:00.012 [2024-06-11 12:29:12.955113] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2f640 is same with the state(5) to be set 00:35:00.012 [2024-06-11 12:29:12.955119] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2f640 is same with the state(5) to be set 00:35:00.012 [2024-06-11 12:29:12.955125] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2f640 is same with the state(5) to be set 00:35:00.012 [2024-06-11 12:29:12.955132] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2f640 is same with the state(5) to be set 00:35:00.012 [2024-06-11 12:29:12.955138] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2f640 is same with the state(5) to be set 00:35:00.012 [2024-06-11 12:29:12.955144] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2f640 is same with the state(5) to be set 00:35:00.012 [2024-06-11 12:29:12.955151] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2f640 is same with the state(5) to be set 00:35:00.272 Initializing NVMe Controllers 00:35:00.272 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:35:00.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:35:00.272 Initialization complete. Launching workers. 00:35:00.272 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8524, failed: 7 00:35:00.272 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1238, failed to submit 7293 00:35:00.272 success 358, unsuccess 880, failed 0 00:35:00.272 12:29:13 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:00.272 12:29:13 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:00.272 EAL: No free 2048 kB hugepages reported on node 1 00:35:03.570 Initializing NVMe Controllers 00:35:03.570 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:35:03.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:35:03.570 Initialization complete. Launching workers. 
00:35:03.570 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 43686, failed: 0 00:35:03.570 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2878, failed to submit 40808 00:35:03.570 success 598, unsuccess 2280, failed 0 00:35:03.570 12:29:16 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:35:03.570 12:29:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:03.570 12:29:16 -- common/autotest_common.sh@10 -- # set +x 00:35:03.570 12:29:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:03.570 12:29:16 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:03.570 12:29:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:03.570 12:29:16 -- common/autotest_common.sh@10 -- # set +x 00:35:05.487 12:29:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:05.487 12:29:18 -- target/abort_qd_sizes.sh@62 -- # killprocess 1741121 00:35:05.487 12:29:18 -- common/autotest_common.sh@926 -- # '[' -z 1741121 ']' 00:35:05.487 12:29:18 -- common/autotest_common.sh@930 -- # kill -0 1741121 00:35:05.487 12:29:18 -- common/autotest_common.sh@931 -- # uname 00:35:05.487 12:29:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:05.487 12:29:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1741121 00:35:05.487 12:29:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:05.487 12:29:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:05.487 12:29:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1741121' 00:35:05.487 killing process with pid 1741121 00:35:05.487 12:29:18 -- common/autotest_common.sh@945 -- # kill 1741121 00:35:05.487 12:29:18 -- common/autotest_common.sh@950 -- # wait 1741121 00:35:05.487 00:35:05.487 real 0m12.027s 00:35:05.487 user 0m49.076s 00:35:05.487 sys 0m1.676s 00:35:05.487 12:29:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:05.487 12:29:18 -- common/autotest_common.sh@10 -- # set +x 00:35:05.487 ************************************ 00:35:05.487 END TEST spdk_target_abort 00:35:05.487 ************************************ 00:35:05.487 12:29:18 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:35:05.487 12:29:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:05.487 12:29:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:05.487 12:29:18 -- common/autotest_common.sh@10 -- # set +x 00:35:05.487 ************************************ 00:35:05.487 START TEST kernel_target_abort 00:35:05.487 ************************************ 00:35:05.487 12:29:18 -- common/autotest_common.sh@1104 -- # kernel_target 00:35:05.487 12:29:18 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:35:05.487 12:29:18 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:35:05.487 12:29:18 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:35:05.487 12:29:18 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:35:05.487 12:29:18 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:35:05.487 12:29:18 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:35:05.487 12:29:18 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:05.487 12:29:18 -- nvmf/common.sh@627 -- # local block nvme 00:35:05.487 12:29:18 
-- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:35:05.487 12:29:18 -- nvmf/common.sh@630 -- # modprobe nvmet 00:35:05.487 12:29:18 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:05.487 12:29:18 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:08.789 Waiting for block devices as requested 00:35:08.789 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:09.049 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:09.050 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:09.050 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:09.309 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:09.309 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:09.309 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:09.309 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:09.568 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:35:09.568 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:09.568 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:09.828 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:09.828 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:09.828 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:09.828 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:10.089 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:10.089 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:10.089 12:29:22 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:35:10.089 12:29:22 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:10.089 12:29:22 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:35:10.089 12:29:22 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:35:10.089 12:29:22 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:10.089 No valid GPT data, bailing 00:35:10.089 12:29:23 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:10.089 12:29:23 -- scripts/common.sh@393 -- # pt= 00:35:10.089 12:29:23 -- scripts/common.sh@394 -- # return 1 00:35:10.089 12:29:23 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:35:10.089 12:29:23 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:35:10.089 12:29:23 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:35:10.089 12:29:23 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:35:10.089 12:29:23 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:10.089 12:29:23 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:35:10.089 12:29:23 -- nvmf/common.sh@654 -- # echo 1 00:35:10.089 12:29:23 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:35:10.089 12:29:23 -- nvmf/common.sh@656 -- # echo 1 00:35:10.089 12:29:23 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:35:10.090 12:29:23 -- nvmf/common.sh@663 -- # echo tcp 00:35:10.090 12:29:23 -- nvmf/common.sh@664 -- # echo 4420 00:35:10.090 12:29:23 -- nvmf/common.sh@665 -- # echo ipv4 00:35:10.090 12:29:23 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:10.090 12:29:23 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:35:10.090 00:35:10.090 Discovery Log Number of Records 2, Generation counter 2 00:35:10.090 =====Discovery Log Entry 0====== 00:35:10.090 trtype: tcp 00:35:10.090 adrfam: ipv4 00:35:10.090 
subtype: current discovery subsystem 00:35:10.090 treq: not specified, sq flow control disable supported 00:35:10.090 portid: 1 00:35:10.090 trsvcid: 4420 00:35:10.090 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:10.090 traddr: 10.0.0.1 00:35:10.090 eflags: none 00:35:10.090 sectype: none 00:35:10.090 =====Discovery Log Entry 1====== 00:35:10.090 trtype: tcp 00:35:10.090 adrfam: ipv4 00:35:10.090 subtype: nvme subsystem 00:35:10.090 treq: not specified, sq flow control disable supported 00:35:10.090 portid: 1 00:35:10.090 trsvcid: 4420 00:35:10.090 subnqn: kernel_target 00:35:10.090 traddr: 10.0.0.1 00:35:10.090 eflags: none 00:35:10.090 sectype: none 00:35:10.090 12:29:23 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:35:10.090 12:29:23 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:10.090 12:29:23 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:10.090 12:29:23 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:10.090 12:29:23 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:10.090 12:29:23 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:35:10.090 12:29:23 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:10.090 12:29:23 -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:10.090 12:29:23 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:10.090 12:29:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:10.090 12:29:23 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:10.090 12:29:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:10.090 12:29:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:10.090 12:29:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:10.090 12:29:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:10.090 12:29:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:10.090 12:29:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:10.090 12:29:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:10.090 12:29:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:10.090 12:29:23 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:10.090 12:29:23 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:10.349 EAL: No free 2048 kB hugepages reported on node 1 00:35:13.645 Initializing NVMe Controllers 00:35:13.645 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:35:13.645 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:35:13.645 Initialization complete. Launching workers. 
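The kernel_target_abort setup above builds the target purely through the kernel nvmet configfs interface (the mkdir/echo/ln -s sequence from nvmf/common.sh). The sketch below restates those steps with the device node, NQN and address from this run; the echoed values are taken from the trace, while the configfs attribute file names are the standard nvmet ones and are assumed, since xtrace does not show redirection targets:

  # Kernel NVMe/TCP target via configfs, condensed from the configure_kernel_target trace.
  modprobe nvmet           # nvmet_tcp is pulled in when the tcp port comes up (both are removed again at cleanup)
  cfg=/sys/kernel/config/nvmet
  mkdir "$cfg/subsystems/kernel_target"
  mkdir "$cfg/subsystems/kernel_target/namespaces/1"
  mkdir "$cfg/ports/1"
  echo SPDK-kernel_target > "$cfg/subsystems/kernel_target/attr_serial"   # identity string from the trace; attribute name assumed
  echo 1                  > "$cfg/subsystems/kernel_target/attr_allow_any_host"
  echo /dev/nvme0n1       > "$cfg/subsystems/kernel_target/namespaces/1/device_path"
  echo 1                  > "$cfg/subsystems/kernel_target/namespaces/1/enable"
  echo 10.0.0.1 > "$cfg/ports/1/addr_traddr"
  echo tcp      > "$cfg/ports/1/addr_trtype"
  echo 4420     > "$cfg/ports/1/addr_trsvcid"
  echo ipv4     > "$cfg/ports/1/addr_adrfam"
  ln -s "$cfg/subsystems/kernel_target" "$cfg/ports/1/subsystems/"

The clean_kernel_target step later in the log undoes this in reverse: the port's subsystem link is removed, the namespace, port and subsystem directories are rmdir'd, and nvmet_tcp/nvmet are unloaded.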
00:35:13.645 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 69420, failed: 0 00:35:13.645 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 69420, failed to submit 0 00:35:13.645 success 0, unsuccess 69420, failed 0 00:35:13.645 12:29:26 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:13.645 12:29:26 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:13.645 EAL: No free 2048 kB hugepages reported on node 1 00:35:16.937 Initializing NVMe Controllers 00:35:16.938 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:35:16.938 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:35:16.938 Initialization complete. Launching workers. 00:35:16.938 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 111324, failed: 0 00:35:16.938 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 28034, failed to submit 83290 00:35:16.938 success 0, unsuccess 28034, failed 0 00:35:16.938 12:29:29 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:16.938 12:29:29 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:16.938 EAL: No free 2048 kB hugepages reported on node 1 00:35:19.548 Initializing NVMe Controllers 00:35:19.548 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:35:19.548 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:35:19.548 Initialization complete. Launching workers. 
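The two discovery log entries printed before these runs were obtained with nvme-cli against the freshly configured kernel port. The discover invocation below is the one from the trace (host NQN/ID included); the commented connect line shows how a host would attach to the advertised subsystem, though this test drives its I/O through the SPDK abort example instead:

  # Discovery against the kernel target, as invoked by the test above.
  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
                --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420

  # A host could then connect to the second log entry (not done by this test):
  # nvme connect -t tcp -a 10.0.0.1 -s 4420 -n kernel_target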
00:35:19.548 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 106265, failed: 0 00:35:19.548 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 26566, failed to submit 79699 00:35:19.548 success 0, unsuccess 26566, failed 0 00:35:19.548 12:29:32 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:35:19.548 12:29:32 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:35:19.548 12:29:32 -- nvmf/common.sh@677 -- # echo 0 00:35:19.548 12:29:32 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:35:19.548 12:29:32 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:35:19.548 12:29:32 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:19.548 12:29:32 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:35:19.548 12:29:32 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:35:19.548 12:29:32 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:35:19.548 00:35:19.548 real 0m14.073s 00:35:19.548 user 0m8.270s 00:35:19.548 sys 0m3.422s 00:35:19.548 12:29:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:19.548 12:29:32 -- common/autotest_common.sh@10 -- # set +x 00:35:19.548 ************************************ 00:35:19.548 END TEST kernel_target_abort 00:35:19.548 ************************************ 00:35:19.548 12:29:32 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:35:19.548 12:29:32 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:35:19.548 12:29:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:35:19.548 12:29:32 -- nvmf/common.sh@116 -- # sync 00:35:19.549 12:29:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:35:19.549 12:29:32 -- nvmf/common.sh@119 -- # set +e 00:35:19.549 12:29:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:35:19.549 12:29:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:35:19.549 rmmod nvme_tcp 00:35:19.549 rmmod nvme_fabrics 00:35:19.549 rmmod nvme_keyring 00:35:19.549 12:29:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:35:19.549 12:29:32 -- nvmf/common.sh@123 -- # set -e 00:35:19.549 12:29:32 -- nvmf/common.sh@124 -- # return 0 00:35:19.549 12:29:32 -- nvmf/common.sh@477 -- # '[' -n 1741121 ']' 00:35:19.549 12:29:32 -- nvmf/common.sh@478 -- # killprocess 1741121 00:35:19.549 12:29:32 -- common/autotest_common.sh@926 -- # '[' -z 1741121 ']' 00:35:19.549 12:29:32 -- common/autotest_common.sh@930 -- # kill -0 1741121 00:35:19.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1741121) - No such process 00:35:19.549 12:29:32 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1741121 is not found' 00:35:19.549 Process with pid 1741121 is not found 00:35:19.549 12:29:32 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:35:19.549 12:29:32 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:23.771 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:35:23.771 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:35:23.771 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:35:23.771 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:35:23.771 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:35:23.771 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:35:23.771 0000:80:01.0 (8086 0b00): Already using the ioatdma 
driver 00:35:23.771 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:35:23.771 0000:65:00.0 (144d a80a): Already using the nvme driver 00:35:23.771 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:35:23.771 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:35:23.771 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:35:23.771 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:35:23.771 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:35:23.771 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:35:23.771 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:35:23.771 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:35:23.771 12:29:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:35:23.771 12:29:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:35:23.771 12:29:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:23.771 12:29:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:35:23.771 12:29:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:23.771 12:29:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:23.771 12:29:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:25.679 12:29:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:35:25.679 00:35:25.679 real 0m44.143s 00:35:25.679 user 1m2.506s 00:35:25.679 sys 0m15.459s 00:35:25.679 12:29:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:25.679 12:29:38 -- common/autotest_common.sh@10 -- # set +x 00:35:25.679 ************************************ 00:35:25.679 END TEST nvmf_abort_qd_sizes 00:35:25.679 ************************************ 00:35:25.679 12:29:38 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:25.679 12:29:38 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:25.679 12:29:38 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:25.679 12:29:38 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:25.679 12:29:38 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:25.679 12:29:38 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:25.679 12:29:38 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:25.679 12:29:38 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:25.679 12:29:38 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:25.679 12:29:38 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:25.679 12:29:38 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:25.679 12:29:38 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:25.679 12:29:38 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:25.679 12:29:38 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:25.679 12:29:38 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:35:25.679 12:29:38 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:35:25.679 12:29:38 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:35:25.679 12:29:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:25.679 12:29:38 -- common/autotest_common.sh@10 -- # set +x 00:35:25.679 12:29:38 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:35:25.679 12:29:38 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:35:25.679 12:29:38 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:35:25.679 12:29:38 -- common/autotest_common.sh@10 -- # set +x 00:35:33.808 INFO: APP EXITING 00:35:33.808 INFO: killing all VMs 00:35:33.808 INFO: killing vhost app 00:35:33.808 INFO: EXIT DONE 00:35:36.346 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:35:36.346 0000:80:01.7 (8086 
0b00): Already using the ioatdma driver 00:35:36.346 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:35:36.346 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:35:36.346 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:35:36.347 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:35:36.347 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:35:36.347 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:35:36.347 0000:65:00.0 (144d a80a): Already using the nvme driver 00:35:36.347 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:35:36.347 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:35:36.347 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:35:36.347 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:35:36.347 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:35:36.347 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:35:36.347 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:35:36.347 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:35:39.639 Cleaning 00:35:39.639 Removing: /var/run/dpdk/spdk0/config 00:35:39.639 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:39.639 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:39.639 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:39.899 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:39.899 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:39.899 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:39.899 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:39.899 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:39.899 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:39.899 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:39.899 Removing: /var/run/dpdk/spdk1/config 00:35:39.899 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:39.899 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:39.899 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:39.899 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:39.899 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:39.899 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:39.899 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:39.899 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:39.899 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:39.899 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:39.899 Removing: /var/run/dpdk/spdk1/mp_socket 00:35:39.899 Removing: /var/run/dpdk/spdk2/config 00:35:39.899 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:39.899 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:39.899 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:39.899 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:39.899 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:39.899 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:39.899 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:39.899 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:39.899 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:39.899 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:39.899 Removing: /var/run/dpdk/spdk3/config 00:35:39.899 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:39.899 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:39.899 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:39.899 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:39.899 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:39.899 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:39.899 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:39.899 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:39.899 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:39.899 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:39.899 Removing: /var/run/dpdk/spdk4/config 00:35:39.899 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:39.899 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:39.899 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:39.899 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:39.899 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:39.899 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:39.899 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:39.899 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:39.899 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:39.899 Removing: /var/run/dpdk/spdk4/hugepage_info 00:35:39.899 Removing: /dev/shm/bdev_svc_trace.1 00:35:39.899 Removing: /dev/shm/nvmf_trace.0 00:35:39.899 Removing: /dev/shm/spdk_tgt_trace.pid1260951 00:35:39.899 Removing: /var/run/dpdk/spdk0 00:35:39.899 Removing: /var/run/dpdk/spdk1 00:35:39.899 Removing: /var/run/dpdk/spdk2 00:35:39.899 Removing: /var/run/dpdk/spdk3 00:35:40.159 Removing: /var/run/dpdk/spdk4 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1259471 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1260951 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1261819 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1262704 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1263427 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1263818 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1264202 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1264558 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1264766 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1265032 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1265385 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1265721 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1266859 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1270850 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1271072 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1271418 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1271747 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1272140 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1272215 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1272783 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1272869 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1273232 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1273356 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1273608 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1273766 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1274279 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1274421 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1274804 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1275172 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1275197 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1275253 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1275587 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1275913 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1276013 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1276313 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1276647 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1276943 00:35:40.159 
Removing: /var/run/dpdk/spdk_pid1277054 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1277377 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1277712 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1277975 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1278100 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1278435 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1278768 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1279024 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1279142 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1279493 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1279827 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1280037 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1280195 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1280552 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1280889 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1281075 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1281256 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1281605 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1281941 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1282099 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1282312 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1282663 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1282995 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1283136 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1283371 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1283723 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1284060 00:35:40.159 Removing: /var/run/dpdk/spdk_pid1284195 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1284445 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1284797 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1285125 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1285258 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1285501 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1285856 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1285924 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1286325 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1290858 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1388483 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1393624 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1405546 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1412649 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1417624 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1418213 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1425316 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1425403 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1426457 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1427486 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1428526 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1429167 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1429314 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1429518 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1429671 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1429679 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1430711 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1431728 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1432744 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1433422 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1433430 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1433766 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1435203 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1436394 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1446447 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1446803 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1451941 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1458911 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1462443 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1474795 00:35:40.419 
Removing: /var/run/dpdk/spdk_pid1485660 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1487689 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1488717 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1509305 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1513864 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1519786 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1521822 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1523872 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1524204 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1524465 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1524570 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1525296 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1527558 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1528503 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1529138 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1535958 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1542412 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1548475 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1593783 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1598587 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1605823 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1607727 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1609511 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1614685 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1619788 00:35:40.419 Removing: /var/run/dpdk/spdk_pid1628723 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1628804 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1633847 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1634003 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1634199 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1634713 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1634870 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1636202 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1638264 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1640170 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1642084 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1644042 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1646044 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1653530 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1654279 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1655323 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1656811 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1663487 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1666648 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1673130 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1680017 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1686979 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1687680 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1688367 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1689061 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1690101 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1690814 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1691500 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1692193 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1697335 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1697673 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1704815 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1705009 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1708156 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1715500 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1715505 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1721489 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1723714 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1726227 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1727449 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1729994 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1731252 00:35:40.678 
Removing: /var/run/dpdk/spdk_pid1741350 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1742026 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1742696 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1745437 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1746054 00:35:40.678 Removing: /var/run/dpdk/spdk_pid1746730 00:35:40.678 Clean 00:35:40.938 killing process with pid 1203158 00:35:50.933 killing process with pid 1203154 00:35:50.933 killing process with pid 1203156 00:35:50.933 killing process with pid 1203155 00:35:50.933 12:30:03 -- common/autotest_common.sh@1436 -- # return 0 00:35:50.933 12:30:03 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:35:50.933 12:30:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:50.933 12:30:03 -- common/autotest_common.sh@10 -- # set +x 00:35:50.933 12:30:03 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:35:50.933 12:30:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:50.933 12:30:03 -- common/autotest_common.sh@10 -- # set +x 00:35:50.933 12:30:03 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:35:50.933 12:30:03 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:35:50.933 12:30:03 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:35:50.933 12:30:03 -- spdk/autotest.sh@394 -- # hash lcov 00:35:51.192 12:30:03 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:35:51.192 12:30:03 -- spdk/autotest.sh@396 -- # hostname 00:35:51.193 12:30:03 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:35:51.193 geninfo: WARNING: invalid characters removed from testname! 
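The coverage post-processing above follows the usual lcov flow: capture counters from the instrumented build tree, merge them with the baseline taken before the tests, then strip third-party and example code out of the combined report. A condensed sketch of the same sequence with an abridged set of --rc switches (paths as in this run, relative to the Jenkins output directory):

  # lcov post-processing, condensed from the commands above.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  OUT=$SPDK/../output
  LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

  $LCOV -c -d "$SPDK" -t "$(hostname)" -o "$OUT/cov_test.info"                     # capture post-test counters
  $LCOV -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"   # merge with the pre-test baseline
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      $LCOV -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"           # drop external and example code
  done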
00:36:17.752 12:30:27 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:17.753 12:30:29 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:18.694 12:30:31 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:20.602 12:30:33 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:21.980 12:30:34 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:23.361 12:30:36 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:24.777 12:30:37 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:25.038 12:30:37 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:25.038 12:30:37 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:36:25.038 12:30:37 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:25.038 12:30:37 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:25.038 12:30:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.038 12:30:37 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.038 12:30:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.038 12:30:37 -- paths/export.sh@5 -- $ export PATH 00:36:25.038 12:30:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.038 12:30:37 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:36:25.038 12:30:37 -- common/autobuild_common.sh@435 -- $ date +%s 00:36:25.038 12:30:37 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1718101837.XXXXXX 00:36:25.038 12:30:37 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1718101837.WrMPfl 00:36:25.038 12:30:37 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:36:25.038 12:30:37 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:36:25.038 12:30:37 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:36:25.038 12:30:37 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:36:25.038 12:30:37 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:36:25.038 12:30:37 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:36:25.038 12:30:37 -- common/autobuild_common.sh@451 -- $ get_config_params 00:36:25.038 12:30:37 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:36:25.038 12:30:37 -- common/autotest_common.sh@10 -- $ set +x 00:36:25.038 12:30:37 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:36:25.038 12:30:37 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:36:25.038 12:30:37 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:25.038 12:30:37 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:36:25.038 12:30:37 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:36:25.038 12:30:37 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:36:25.038 12:30:37 -- 
spdk/autopackage.sh@19 -- $ timing_finish 00:36:25.038 12:30:37 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:25.038 12:30:37 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:36:25.038 12:30:37 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:25.038 12:30:37 -- spdk/autopackage.sh@20 -- $ exit 0 00:36:25.038 + [[ -n 1148773 ]] 00:36:25.038 + sudo kill 1148773 00:36:25.049 [Pipeline] } 00:36:25.070 [Pipeline] // stage 00:36:25.076 [Pipeline] } 00:36:25.094 [Pipeline] // timeout 00:36:25.099 [Pipeline] } 00:36:25.117 [Pipeline] // catchError 00:36:25.123 [Pipeline] } 00:36:25.140 [Pipeline] // wrap 00:36:25.146 [Pipeline] } 00:36:25.162 [Pipeline] // catchError 00:36:25.171 [Pipeline] stage 00:36:25.173 [Pipeline] { (Epilogue) 00:36:25.188 [Pipeline] catchError 00:36:25.189 [Pipeline] { 00:36:25.204 [Pipeline] echo 00:36:25.206 Cleanup processes 00:36:25.211 [Pipeline] sh 00:36:25.496 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:25.496 1763773 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:25.510 [Pipeline] sh 00:36:25.796 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:25.796 ++ grep -v 'sudo pgrep' 00:36:25.796 ++ awk '{print $1}' 00:36:25.796 + sudo kill -9 00:36:25.796 + true 00:36:25.808 [Pipeline] sh 00:36:26.094 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:38.322 [Pipeline] sh 00:36:38.607 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:38.607 Artifacts sizes are good 00:36:38.621 [Pipeline] archiveArtifacts 00:36:38.628 Archiving artifacts 00:36:38.875 [Pipeline] sh 00:36:39.169 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:36:39.184 [Pipeline] cleanWs 00:36:39.197 [WS-CLEANUP] Deleting project workspace... 00:36:39.197 [WS-CLEANUP] Deferred wipeout is used... 00:36:39.208 [WS-CLEANUP] done 00:36:39.211 [Pipeline] } 00:36:39.255 [Pipeline] // catchError 00:36:39.264 [Pipeline] sh 00:36:39.561 + logger -p user.info -t JENKINS-CI 00:36:39.571 [Pipeline] } 00:36:39.586 [Pipeline] // stage 00:36:39.592 [Pipeline] } 00:36:39.607 [Pipeline] // node 00:36:39.611 [Pipeline] End of Pipeline 00:36:39.648 Finished: SUCCESS
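Just before the pipeline epilogue, autopackage.sh rendered the per-step timings gathered during the run into a flamegraph. A standalone sketch of that step; the FlameGraph path and flags are the ones from the trace, while the output file name is assumed since xtrace does not show the redirection:

  # Build-timing flamegraph, as produced by timing_finish in autopackage.sh.
  FLAMEGRAPH=/usr/local/FlameGraph/flamegraph.pl
  OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output

  if [[ -x "$FLAMEGRAPH" ]]; then
      # timing.txt holds the step stacks and per-step durations written by the timing_enter/timing_exit helpers
      "$FLAMEGRAPH" --title 'Build Timing' --nametype Step: --countname seconds \
          "$OUT/timing.txt" > "$OUT/timing.svg"    # output name assumed
  fi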